
### emorgan's blog

By emorgan, 2 years ago,

Hi everyone!

The Final Round of Technocup 2022 starts this Sunday, March 20, 2022 at 11:00 MSK (09:00 UTC)!

#### The final results of the Technocup Final Round

For those who want to compete on the same problems, we will hold a regular Codeforces Round, combined for both divisions. The round starts at Mar/20/2022 14:35 (Moscow time).

If you are a participant of the official Technocup Finals, you are not allowed to take part in Round 778. We ask participants of the official Finals not to discuss the problems in open media until the evening.

This round is possible thanks to the following people:

Score distribution: $500 - 750 - 1250 - 2000 - 2500 - 3000 - 3500 - 4000$

UPDATE: Editorial is out

UPDATE: Congratulations to the winners:


By emorgan, 3 years ago,

Prerequisites: Grundy numbers (aka nimbers; I will refer to them as Grundy numbers) and their connection to mex and bitwise xor.

Note about mex: We know that the mex of a set of non-negative integers is the smallest non-negative integer which is not in the set. This definition works fine for any proper subset of $\mathbb{N}$, since $\text{mex}(S)=\min(\mathbb{N}\setminus S)$, and min is well-defined for any non-empty subset of $\mathbb{N}$. However, the minimum of an empty set is not well-defined, so neither is $\text{mex}(\mathbb{N})$, so we will define a new object $\omega=\text{mex}(\mathbb{N})$. Most of the properties of this object as it relates to mex are pretty straightforward, for example

$\text{mex}(\mathbb{N}\cup\{\omega\}) = \omega+1$
$\text{mex}(\{a\omega+b\,|\,a,b\in\mathbb{N}\}) = \omega^2$
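For finite sets, mex is easy to compute directly; here is a minimal Python sketch ($\omega$ itself stays purely symbolic and never arises for finite inputs):

```python
def mex(s):
    """Smallest non-negative integer not contained in the finite set s."""
    m = 0
    while m in s:
        m += 1
    return m
```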

Games with Finite States: Given any finite impartial game (i.e. an impartial game which is guaranteed to terminate in a finite number of moves), under normal play, we can represent this game as a DAG where each vertex represents a state the game can be in, and each edge represents a move that can be made by a player. A state $v$ with out-degree 0 is losing for the player who has to make a move, otherwise it is winning iff there exists at least one edge $(v, u)$ such that $u$ is losing.

Grundy numbers generalize this concept to easily handle multiple games played in parallel. Let $g(v)$ be the Grundy number of vertex $v$, then

$g(v)=\text{mex}(\{g(u)\,|\,(v, u) \text{ exists}\})$

Then a state $v$ is winning for the player who is about to make a move iff $g(v)>0$. Additionally, if we consider two games played in parallel such that each player first chooses a game on which to make a move, and then chooses any valid move in that game, then the Grundy number for a pair of states $(u,v)$ from different games is $g(u)\oplus g(v)$, where $\oplus$ denotes bitwise xor. These facts together are known as the Sprague-Grundy theorem, and there are some great blogs dealing with justifications and applications of these ideas, for example this one. But ok, you already knew these things; now I will explain a slight generalization of this concept that could be useful in solving some problems.
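As a concrete illustration, Grundy numbers of every state can be computed by a memoized DFS over the game DAG (the DAG below is a made-up example, not from any specific problem):

```python
def grundy_numbers(adj):
    """Grundy number of every vertex of a game DAG.
    adj maps each state v to the list of states reachable in one move."""
    g = {}
    def dfs(v):
        if v not in g:
            reachable = {dfs(u) for u in adj[v]}
            m = 0                      # mex of the successors' Grundy numbers
            while m in reachable:
                m += 1
            g[v] = m
        return g[v]
    for v in adj:
        dfs(v)
    return g

# Example: 3 is terminal (g=0); 1 and 2 move only to 3 (g=1);
# 0 moves to 1 or 2, so g(0) = mex({1}) = 0 and state 0 is losing.
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
g = grundy_numbers(adj)
```

Playing states $u$ and $v$ of two such games in parallel then has Grundy number `g[u] ^ g[v]`, per Sprague-Grundy.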

Motivating Problem: Suppose Alice and Bob are playing a game on a matrix $A$ whose entries are all non-negative integers. In one move, a player can choose any non-zero item of $A$ and decrease it by any positive integer. Then, they can choose any items in the same row which are to the right of the one they chose, and set them to whatever non-negative integer values they want. A player loses if it is their turn and $A$ contains all 0s. Alice goes first. Who wins?

Note that this game is guaranteed to terminate in a finite number of moves, although that number can be arbitrarily large depending on the choices of the players. In particular, it is impossible to visit the same state twice.

Games with Infinite States: Let's use $g(A)$ to denote the Grundy number of $A$. Let's first consider a simpler version of this problem where the matrix only has one row. If it also has one column, then it is equivalent to a game of nim, i.e.

$g([x])=x$

Now let's consider the case where it has two columns. In particular, let's look at the case where $A = [1, 0]$. In this position, Alice can only change $A$ to be of the form $[0, x]$ for any $x\in\mathbb{N}$, meaning

$g([1, 0]) = \text{mex}(\{g([0, x])\,|\,x \in \mathbb{N}\})$

However, it is easy to see that leading 0s in a row don't affect the game at all, so in fact

$g([1, 0]) = \text{mex}(\{g([x])\,|\,x \in \mathbb{N}\}) = \text{mex}(\mathbb{N}) = \omega$

Now consider the case $A = [1, x]$ for some $x\in\mathbb{N}$. Alice can move to any position $[0, y]$, but she can also move to any position $[1, y]$ with $y<x$. By an inductive argument, we get

$g([1, x]) = \text{mex}(\mathbb{N}\cup\{\omega,\omega+1,...,\omega+x-1\}) = \omega+x$

Now consider the case $A=[y,x]$. Using a similar argument as before, we get the following result:

$g([y,x]) = y\omega + x$

We can keep repeating a similar argument for more columns, and in general, for a row with many elements:

$g([a_n,...,a_1,a_0]) = \sum\limits_{i=0}^{n}a_i\omega^i$

I'm not aware of this type of game being well-known, so I'm going to call it "cascading nim", and say that a game like this has a "Grundy polynomial" in $\omega$ (remember that $\omega$ is basically just a dummy object that we defined to be equal to $\text{mex}(\mathbb{N})$). The Grundy polynomial of a game of cascading nim is given by simply taking the values of the piles in order as coefficients.

Solving the Original Problem: We can deal with the matrix $A$ having multiple rows using the normal method of xor-ing the Grundy values of the parallel games together. To take the xor of two Grundy polynomials, we simply take their coefficient-wise xor. A state is losing iff its Grundy polynomial is 0, i.e. all of its coefficients are 0. So the answer to our problem is that Alice wins iff at least one column of $A$ has non-zero xor.
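So the winner check is a few lines of Python (the function name is mine):

```python
from functools import reduce
from operator import xor

def alice_wins(A):
    """Cascading nim on each row of A, rows played in parallel.
    Each column of A contributes one coefficient of the overall Grundy
    polynomial (the xor of that column); Alice, moving first, wins iff
    some coefficient is non-zero."""
    return any(reduce(xor, col) != 0 for col in zip(*A))
```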

Nim with Increases: A well-known variant of nim is "nim with increases", where players are allowed to add any number of stones to a pile as an alternative to removing some, but the players are only allowed to do this a limited number of times. There are many tutorials explaining how this is equivalent to normal nim, because any time a player adds stones to a pile, their opponent can just undo their move with a normal nim move. However, using the theory of Grundy polynomials we can show the same result using just the Sprague-Grundy theorem, without appealing to arguments like "it is never optimal to...".

Suppose that we are dealing with a single pile of $n$ stones, and the players together are allowed to make at most $m$ moves of the increasing type, in which they add or remove any number of stones (possibly 0), plus as many normal nim moves as they wish. Then this game is equivalent to a game of cascading nim on the array $[m\%2, n]$. The Grundy polynomial of this game is therefore $(m\%2)\omega+n$, which is winning for the first player iff $(m\%2)>0$ or $n>0$; more importantly, it means we can combine games of this type using the xor method (which we can't do using the standard argument).
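For example, the winner of several such piles played in parallel can be read off by xor-ing the polynomials coefficient-wise (a sketch; the function name is mine):

```python
def first_player_wins(piles):
    """piles: list of (n, m) pairs -- a pile of n stones on which the
    players together may make at most m increasing moves.
    Each pile contributes the Grundy polynomial (m % 2)*w + n; xor the
    coefficients of w and of 1 separately across piles."""
    w_coeff = c_coeff = 0
    for n, m in piles:
        w_coeff ^= m % 2
        c_coeff ^= n
    return w_coeff != 0 or c_coeff != 0
```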

Note that if each player has their own $m$, then the game is no longer impartial, so we can't use Sprague-Grundy. If increasing is forced (i.e. we can't use up an increasing move to remove stones, but increasing by 0 is allowed), then the Grundy polynomial for a state $m, n$ is given by the following piecewise function:

$\begin{cases} n-m & \text{if }n\geq m\\ \omega+m-n-1 & \text{if }n<m\\ \end{cases}$

Also, if increasing by 0 is banned, then we can assign normal Grundy numbers, since the Grundy number for a state $m, n$ is just $n$.

Solution to 1451F - Nullify The Matrix: Let's say that the matrix given in the problem is $A$. Let's construct a new matrix $B$ where each possible path from the top-left corner to the bottom-right corner in $A$ is a row in $B$, so $B$ has $n+m-1$ columns. Then playing a move on $A$ as described in the problem is equivalent to choosing a row in $B$ and playing a move of cascading nim on that row.

There is a problem: some other rows of $B$ will be affected by the move as well. However, I claim that this isn't a problem, as long as we make a small modification to how we compute the Grundy polynomial. What we need to show is that if we play a move of cascading nim on $B$ under the new rules, where some elements appear multiple times, then the new Grundy polynomial $g(B)$ will be the same as the Grundy polynomial we would expect under normal cascading nim rules. The change is this: if the same item of $A$ appears in a column of $B$ multiple times, we only xor it into the answer once. Now, each time we choose some row of $B$ and make a move of cascading nim on it, the value of $g(B)$ computed under the excluding-duplicates rule, where some elements are dependent, will be the same as $g(B)$ under normal rules, where all elements of $B$ are independent.

So, the Grundy polynomial for the matrix $A$ is given by taking the total xor of each bottom-left-to-top-right diagonal as coefficients, and player 1 wins the game iff at least one of these coefficients is non-zero. Note that this is a more powerful result than just knowing who is winning, since we could use it to calculate who is winning when playing on multiple matrices in parallel, even if those matrices have different dimensions.
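In code, the winner check for 1451F is just the anti-diagonal xors (a sketch; the function name is mine):

```python
def player1_wins(A):
    """1451F: cells (i, j) with equal i + j lie on the same anti-diagonal,
    so they contribute to the same coefficient of the Grundy polynomial.
    Player 1 wins iff some anti-diagonal has non-zero xor."""
    n, m = len(A), len(A[0])
    diag = [0] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            diag[i + j] ^= A[i][j]
    return any(diag)
```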

Generalization of 1451F: Suppose we are given a DAG on $n$ vertices where vertex $i$ has a value $h_i$, and in one move, a player can choose an arbitrary directed path in the DAG, decrease the value of the first vertex in the path by a positive integer, and then set the values of the other vertices in the path to any non-negative integers.

Let's construct an $n\times n$ matrix $B$, initially containing all zeros. Then, for each vertex $v$ with index $i$, let $l$ be the length of the longest path in the DAG starting at $v$, and set $B_{i,n-l}$ to $h_i$. I claim that the Grundy polynomial for cascading nim on $B$ is the same as the Grundy polynomial for the game on our DAG!

To prove that this is correct, we should show that every valid move on the DAG corresponds to a valid move on $B$, and the converse as well. To prove the forward direction, let's make any move on the DAG. Then we should make a valid move of cascading nim on $B$, such that the row we make the move on is the same as the row of the first vertex in the path we chose. Then, we can easily choose the values correctly such that it has the same effect on the Grundy polynomial of $B$ as the initial change to our DAG.

To prove the converse direction, we just apply the same argument as before in reverse. Let's make any move of cascading nim on $B$. The leftmost element in our move had to be non-zero, so it must already correspond to some vertex $v$ in the graph. Then we can just choose the longest path starting at $v$, which gives us access to every column to the right of our first element in $B$, and alter the values of these vertices such that it has the same effect on the Grundy polynomial of the DAG as our move on $B$ did.

So, the final solution is this: The vertices can be partitioned into "level sets" such that if $l$ is the length of the longest directed path starting at $v$, then $v$ belongs to level set $l$. Then, the coefficient of $\omega^i$ in the Grundy polynomial of this game is the xor of the values of the vertices in level set $i$.
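A sketch of this solution, assuming `adj` maps each vertex to its list of successors and `value` holds the vertex values:

```python
def first_player_wins_dag(adj, value):
    """Level of v = length of the longest directed path starting at v.
    The coefficient of w^l in the Grundy polynomial is the xor of the
    values of all vertices on level l; the first player wins iff some
    coefficient is non-zero."""
    level = {}
    def longest(v):
        if v not in level:
            # a sink has longest path 0, hence the default of -1
            level[v] = 1 + max((longest(u) for u in adj[v]), default=-1)
        return level[v]
    coeff = {}
    for v in adj:
        coeff[longest(v)] = coeff.get(longest(v), 0) ^ value[v]
    return any(coeff.values())
```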

Solution to 1149E - Election Promises: We can do the exact same thing as the previous problem, but instead of having $l$ be the length of the longest path starting at $v$, we will define it as follows:

$l(v) = \text{mex}(\{l(u)\,|\,(v,u)\text{ exists}\})$

(This is the same as $M(v)$ in the editorial.) Then construct the matrix $B$ using the same method and read off the Grundy polynomial. We can apply a similar argument to show that there is a correspondence between cascading nim moves on $B$ and moves on the DAG. For each move on $B$, the leftmost element we choose must correspond to a vertex of the DAG, and we can use that vertex to access every column to the right of it (since for every $j < l(v)$, there is an edge $(v, u)$ such that $l(u)=j$). For each move on the DAG, make a move of cascading nim on $B$ with the leftmost item being the one corresponding to the vertex we decreased.
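The only change from the longest-path version is the level function; a sketch (assuming `adj` is the successor list and `h` the vertex values, as in the problem):

```python
def player1_wins_1149e(adj, h):
    """1149E sketch: l(v) = mex of l(u) over edges (v, u); player 1 wins
    iff the xor of h over some level set is non-zero."""
    l = {}
    def lv(v):
        if v not in l:
            succ = {lv(u) for u in adj[v]}
            m = 0              # mex of the successors' levels
            while m in succ:
                m += 1
            l[v] = m
        return l[v]
    coeff = {}
    for v in adj:
        coeff[lv(v)] = coeff.get(lv(v), 0) ^ h[v]
    return any(coeff.values())
```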


By emorgan, history, 4 years ago,

This concerns the following problem: 680D - Bear and Tower of Cubes. Here's a rough outline of my thoughts while solving this problem.

Let $f(m, x)$ be the optimal number of boxes (we can ignore the part about maximizing $X$; it's trivial to extend the solution to accomplish this too) for input $m$, where we are only allowed to use boxes of length at most $x$. The answer to the original problem is $f(m, 10^5)$, we have the base case $f(m, 0)=0$, and $f$ satisfies the recurrence

$f(m, x) = \max\begin{cases} f(m-x^3, x)+1 & \text{if we take a cube of side $x$ (requires $m\geq x^3$)} \\ f(\min(m, x^3-1), x-1) & \text{if we don't} \\ \end{cases}$

Evaluating $f(m, x)$ with DP is too slow for large $m$, but we can observe that for each pair $(m, x)$, it is either optimal to take a cube or not, and that if it is optimal to take for $(m, x)$, then it is also optimal to take for $(m+1, x)$. I don't have a rigorous proof of this, but it is intuitively true and an AC submission confirms it. So, we can binary search on the "turning point" $p_x$ such that it is optimal to take for all $m\geq p_x$, and optimal not to take for all $m<p_x$. This yields a correct solution, but still gets TLE for large $m$.

The next observation is that since we only care about $x\leq 10^5$, we could theoretically precompute all $10^5$ values of $p_x$, which would yield a fast greedy solution. A hardcoded table of $10^5$ long longs is a little too large to embed in the source, so this isn't really feasible. However, while computing these values, I noticed some patterns in the values of $p_x$ and began investigating properties of the sequence; I eventually decided to look at the sequence $p_x-x^3$, which displayed some really weird behavior. Basically, for $x$ from $1$ to $10^5$ it only takes on $12$ distinct values, which are non-decreasing. The sequence looks like

$0, 6, 15, 23, 50, 50, 114, 114, 114, 114, 330, ..., 1330, ..., 10591, ..., 215970, ..., 19464802, ..., 16542386125$

where the later values are repeated many times. And so, I managed to use these values to encode all of the values of $p_x$ and get an AC submission for the problem: 84824995.
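The first few values of $p_x-x^3$ can be reproduced by brute-forcing the recurrence and the turning-point search for small $x$; a sketch (the `>=` tiebreak is my assumption, reflecting that the original problem also prefers the larger $X$):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(m, x):
    """Max number of cubes with total volume at most m and side lengths <= x."""
    if x == 0:
        return 0
    best = f(min(m, x**3 - 1), x - 1)          # skip side length x entirely
    if m >= x**3:
        best = max(best, f(m - x**3, x) + 1)   # take one cube of side x
    return best

def turning_point(x, limit):
    """Smallest m for which taking a cube of side x is (weakly) optimal."""
    for m in range(x**3, limit):
        if f(m - x**3, x) + 1 >= f(min(m, x**3 - 1), x - 1):
            return m
    return None

# p_x - x^3 for x = 1..4 reproduces the start of the sequence: 0, 6, 15, 23
```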

So, the obvious question here is, why does this work? I have no idea where this pattern is coming from.


By emorgan, history, 4 years ago,

When submitting a solution to a problem, I think most people get the general impression that once you pass 30-40 test cases, you are basically home free, i.e. you are highly unlikely to fail a later test case and can consider the problem solved. Despite this, many problems have upwards of 200 test cases, which can mean several minutes of grading time in the worst case, not even counting time spent in the queue.

On a related note, the judging software seems to have been heavily overburdened recently, possibly linked to the coronavirus.

I think that a good feature to have would be to allow lots of test cases for problems which are relatively recently added, but once a problem has been solved by ~1000+ people, most of the test cases should be removed to allow submissions to go through quicker. In order to choose which test cases to remove, the system should keep track of how often each test case is failed. Test cases which are almost never failed should be removed. I strongly suspect that over 50% of test cases on most problems fall into this category, if not 80% to 90% for some problems.

Would this bring about too many false positives for submissions? (For older problems, do we even care?)
