A. Thor vs Loki
The strength of Thor is $$$z - x$$$, and that of Loki is $$$z - y$$$. Thus, the maximum strength is $$$\text{max}(z-x, z-y)$$$. Equivalently, we can also say the maximum strength is $$$z - \text{min}(x, y)$$$.
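The identity can be checked directly (a one-line Python sketch; the function name is ours):

```python
def max_strength(x, y, z):
    # the two strengths are z - x and z - y; taking the max of those is the
    # same as subtracting the smaller of x and y from z
    assert max(z - x, z - y) == z - min(x, y)
    return z - min(x, y)
```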
Complexity: $$$O(1)$$$
B. Archery
Think of what strategy to take based on the number of monsters of each type.
Think of what to do with types that have a large number of monsters. Then, think of what to do with the other ones.
The optimal strategy is:
First, shoot down all monsters of a particular type if there are more than $$$k$$$ of them. Do this until there are no more types with more than $$$k$$$ monsters.
The remaining monsters can be shot $$$k$$$ at a time.
It is easy to see that we cannot do any better. Thus, if $$$cnt$$$ represents the number of types with more than $$$k$$$ monsters, and $$$tot$$$ represents the total number of monsters of such types, then the answer is $$$cnt + \lceil \frac{n - tot}{k} \rceil$$$.
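The counting can be sketched as follows (a Python illustration; the input format, and the reading that one special shot wipes out a whole type while an ordinary shot takes down up to $$$k$$$ monsters, are our assumptions based on the strategy above):

```python
def min_shots(counts, k):
    # counts[t] = number of monsters of type t
    big = [c for c in counts if c > k]          # types with more than k monsters
    cnt, tot, n = len(big), sum(big), sum(counts)
    return cnt + -(-(n - tot) // k)             # cnt + ceil((n - tot) / k)
```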
Complexity: $$$O(n)$$$
C. Homecoming
Can you think of a DP solution?
Can we calculate the shortest path to any vertex with a fixed number of edges?
We can solve this problem with a simple DP. Let $$$dp[i][j]$$$ denote the minimum distance between vertices $$$1$$$ and $$$i$$$ among all paths containing $$$j$$$ or fewer edges. Our transitions will therefore be:
for j from 1 to n:
    for each edge from x to i:
        dp[i][j] = min(dp[i][j], dp[i][j - 1], dp[x][j - 1] + j * weight_of_edge)
The transitions can be explained as follows. Whenever we are calculating $$$dp[i][j]$$$, there are 2 possible cases:
A path containing fewer than $$$j$$$ edges gives the shortest distance between $$$1$$$ and $$$i$$$. In this case, $$$dp[i][j] = dp[i][j-1]$$$.
A path containing exactly $$$j$$$ edges gives the shortest distance between $$$1$$$ and $$$i$$$. For this case, we can iterate over all the edges ending at $$$i$$$ which will be the $$$j$$$-th edge in our path. Thus, $$$dp[i][j] = \text{min}(dp[i][j], dp[x][j - 1] + j\times\text{weight_of_edge})$$$ as we iterate through the edges from $$$x$$$ to $$$i$$$.
Note that a shortest path contains at most $$$n$$$ edges. Thus, the minimum distance from $$$1$$$ to $$$i$$$ will be $$$dp[i][n]$$$.
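The DP above can be sketched as follows (a Python illustration; the edge-list input format is our assumption, and we follow the editorial's cost model where using an edge as the $$$j$$$-th edge of a path costs $$$j \times w$$$, with nonnegative weights):

```python
def shortest_distances(n, edges):
    # edges: directed triples (x, i, w)
    INF = float('inf')
    prev = [INF] * (n + 1)            # prev[i] = dp[i][j - 1]
    prev[1] = 0
    for j in range(1, n + 1):
        cur = prev[:]                 # case 1: a path with fewer than j edges
        for x, i, w in edges:
            if prev[x] < INF:         # case 2: this edge is the j-th edge
                cur[i] = min(cur[i], prev[x] + j * w)
        prev = cur
    return prev                       # prev[i] = dp[i][n]
```

For example, with edges $$$(1,2,5)$$$, $$$(2,3,1)$$$, $$$(1,3,100)$$$ the path $$$1 \to 2 \to 3$$$ costs $$$1\cdot 5 + 2\cdot 1 = 7$$$, beating the direct edge.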
Complexity: $$$O(nm)$$$
D. Games
What can you say about the parity of the number of stones for each player?
Does the exact value of available numbers matter?
If there are fewer even numbers than odd numbers available, then how many states are winning?
The problem comes with a lot of constraints, so at first we will try to simplify them a bit.
In a move, we can add or subtract any odd number of stones. This means that from an even number of stones we can go to any odd number of stones, and from an odd number of stones to any even number. We can thus simplify the problem significantly by observing that only the count of allowed even positions $$$E$$$ and allowed odd positions $$$O$$$ matters; the exact values of the positions do not.
Now let's do a bit of casework.
- Case 1: $$$E < O$$$. If the starting position is even, Natasha wins no matter what moves she or Scott play afterwards: Natasha can only ever be at odd positions and Scott only at even positions, so he will run out of positions before she does. Similarly, if she starts from any odd position, she loses no matter what moves are played. So there will be a total of $$$E$$$ winning starting positions in this case.
- Case 2: $$$O < E$$$. Proceeding as above, we can see that there are $$$O$$$ winning starting positions.
- Case 3: $$$O = E$$$. In this case, Natasha will win starting from both odd positions and even positions. Thus, there are $$$O + E$$$ winning starting positions.
So now our task is greatly simplified. We loop through all the possibilities of $$$(O, E)$$$, count the contribution of the subsets falling in each case, and add them to the answer.
The count of even numbers from $$$0$$$ to $$$N$$$ is $$$\lceil \frac{N + 1}{2} \rceil$$$, which we denote by $$$P$$$. The count of odd numbers from $$$0$$$ to $$$N$$$ is $$$\lfloor \frac{N + 1}{2} \rfloor$$$, which we denote by $$$Q$$$.
So now we can compute the answer as:
- Case 1: $$$E < O$$$. Let us iterate over $$$E$$$; the contribution to the answer is $$$\sum_{E=0}^{P} E\binom{P}{E}\sum_{O=E+1}^{Q}\binom{Q}{O}$$$.
Note that we can maintain $$$\sum_{ O={E+1} }^{Q}\binom{Q}{O}$$$ as we compute the sum, so calculating this sum only takes $$$O(N)$$$ time.
- Case 2: $$$O < E$$$. Let us iterate over $$$O$$$; the contribution to the answer is $$$\sum_{O=0}^{Q} O\binom{Q}{O}\sum_{E=O+1}^{P}\binom{P}{E}$$$.
- Case 3: $$$O = E$$$. Let us iterate over $$$O$$$; the contribution to the answer is $$$\sum_{O=0}^{\min(P, Q)} 2O\binom{P}{O}\binom{Q}{O}$$$.
Thus, we can compute $$$\text{ans}$$$ in $$$O(N)$$$ time.
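The $$$O(N)$$$ computation of the three cases above can be sketched in Python as follows (modular arithmetic omitted; we assume, per the case analysis, that the answer totals the winning starting positions over all choices of allowed sets, and the direct $$$O(N^2)$$$ version is included only to check the suffix-sum refactoring):

```python
from math import comb

def total_winning(N):
    P, Q = (N + 2) // 2, (N + 1) // 2       # evens / odds in 0..N
    # suffix sums of binomial coefficients for the inner sums
    suf_q = [0] * (Q + 2)
    for O in range(Q, -1, -1):
        suf_q[O] = suf_q[O + 1] + comb(Q, O)
    suf_p = [0] * (P + 2)
    for E in range(P, -1, -1):
        suf_p[E] = suf_p[E + 1] + comb(P, E)
    ans = 0
    for E in range(P + 1):                   # case 1: E < O
        ans += E * comb(P, E) * suf_q[min(E + 1, Q + 1)]
    for O in range(Q + 1):                   # case 2: O < E
        ans += O * comb(Q, O) * suf_p[min(O + 1, P + 1)]
    for O in range(min(P, Q) + 1):           # case 3: O == E
        ans += 2 * O * comb(P, O) * comb(Q, O)
    return ans

def total_winning_slow(N):
    # direct O(N^2) evaluation of the same case analysis, for checking
    P, Q = (N + 2) // 2, (N + 1) // 2
    return sum((E if E < O else O if O < E else O + E)
               * comb(P, E) * comb(Q, O)
               for E in range(P + 1) for O in range(Q + 1))
```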
Complexity: $$$O(N)$$$
E. Ultron's Army
Can you solve it in less than quadratic time?
How can the special conditions on queries be used?
Build a tree on the query segments.
DSU on Tree!!
First, we will describe a solution based on Mo's Algorithm, that works in $$$O(N\sqrt{N})$$$.
We maintain the frequency of each number in the current range in an array $$$freq$$$, and the answer for current range $$$ans$$$. We can add a number $$$x$$$ to the range by $$$ans = ans + freq[S-x]$$$ where $$$S$$$ denotes the required sum of pairs. Similarly we can delete a number $$$x$$$ from the range by $$$ans = ans - freq[S-x]$$$. We just need to be slightly careful in the case when $$$x = S - x$$$. Overall, we can add and delete in $$$O(1)$$$ and hence, Mo's algorithm can solve this problem in $$$O(N\sqrt{N})$$$. This will exceed the time limit, unless highly optimised.
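The add/delete operations can be sketched as follows (a Python illustration; note that the update order, counting before inserting and deleting before discounting, makes the $$$x = S - x$$$ case work out automatically):

```python
class PairCounter:
    # maintains ans = number of pairs in the current multiset summing to S
    def __init__(self, S, max_val):
        self.S, self.freq, self.ans = S, [0] * (max_val + 1), 0

    def add(self, x):
        s = self.S - x
        if 0 <= s < len(self.freq):
            self.ans += self.freq[s]   # count partners before inserting x,
        self.freq[x] += 1              # so x is never paired with itself twice

    def remove(self, x):
        self.freq[x] -= 1              # remove x first, then discount, so the
        s = self.S - x                 # x == S - x case is again automatic
        if 0 <= s < len(self.freq):
            self.ans -= self.freq[s]
```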
However, the query ranges satisfy some special conditions in this problem! How can we use these to our advantage?
Consider each range as one node of a tree. For each range, we find the smallest other range which covers it completely and make it the parent of this node in our tree. We also create a fake node, which will act as parent of all the ranges which are not covered by any other range. We can now use DSU on Trees to solve this problem!!
For each node, we can find its biggest child (biggest in terms of the length of the range it represents, not subtree size). To process the answer for a node, we do the following:
- Process all its child nodes (if any), not including the biggest child.
- Process the biggest child, and maintain the state of $$$freq$$$ as it is.
- Iterate from left end to right end of current range, and add each element in $$$O(1)$$$ (as in the Mo's solution), while skipping all elements in the range of its biggest child.
- Roll back the changes made to $$$freq$$$ if the current node is not the biggest child of its parent.
Building the tree can be done in $$$O(Q\log Q)$$$ by sorting the queries and then using a stack, or in $$$O(Q)$$$ if we use counting sort. DSU on Tree works in $$$O(N\log N)$$$ and hence the overall complexity of this solution is $$$O(Q\log Q + N\log N)$$$, which will comfortably pass in the given time limit.
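Building the parent tree with a sort and a stack can be sketched as follows (a Python illustration; it assumes, per the problem's special condition, that the ranges form a laminar family, i.e. any two are either nested or disjoint):

```python
def build_query_tree(queries):
    # queries: list of (l, r) ranges forming a laminar family
    order = sorted(range(len(queries)),
                   key=lambda i: (queries[i][0], -queries[i][1]))
    parent = [-1] * len(queries)      # -1 plays the role of the fake root
    stack = []                        # chain of ranges covering the sweep point
    for i in order:
        l, r = queries[i]
        # by laminarity, a range on the stack whose right end is left of r
        # must be entirely disjoint from the current range, so pop it
        while stack and queries[stack[-1]][1] < r:
            stack.pop()
        if stack:
            parent[i] = stack[-1]     # smallest range covering queries[i]
        stack.append(i)
    return parent
```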
Complexity: $$$O(Q\log Q + N\log N)$$$
F. Hail Hydra
Aho-Corasick.
BFS along edges in alphabetical order.
In problems where we are given a list of patterns that must appear as substrings, and we have to count the number of strings satisfying some constraints or find the smallest such string, Aho-Corasick is almost always a good idea.
The remainder of the editorial assumes familiarity with Aho-Corasick. If you are not comfortable with it, CP-Algorithms has a really nice entry on it.
To start the solution, first build the Aho-Corasick automaton for the given strings, and mark all the states at which at least one of the patterns terminates. Once we have done this, we build the exit links for each node. (An exit link points to the nearest leaf vertex reachable via suffix links; details are in the above CP-Algorithms article.) Using exit links we can find the exit count of each node: the number of leaf nodes reachable from the current node via suffix links. In other words, this is the number of patterns that are suffixes of the current state.
Once we have built this, the remainder of the task is pretty straightforward. We create a new graph, where each vertex corresponds to a state (vertex in Aho-Corasick, number of occurrences of patterns). Since the total number of states in Aho-Corasick is at most $$$\sum_{i = 1}^{K} P_{i} + 1$$$, and the number of occurrences we need to track is at most $$$N$$$, there are at most $$$(N + 1)(\sum_{i = 1}^{K} P_{i} + 1)$$$ states in our new graph.
Now comes the part of adding edges to the graph. It is easy to observe that if the automaton transitions from state $$$A$$$ to state $$$B$$$ and the exit count of $$$B$$$ is $$$x$$$, then our graph has an edge from $$$(A, \text{no. of occurrences})$$$ to $$$(B, \text{no. of occurrences} + x)$$$.
Finding the length of the smallest string is now easy: run a BFS from $$$(0,0)$$$ and record the first time the second parameter crosses $$$N$$$, i.e. we reach a state $$$(A, \text{occurrences})$$$ with $$$N \leq \text{occurrences}$$$. Finding the lexicographically smallest string is a simple addition: we run the BFS in lexicographical order, i.e. first transition along the edge corresponding to 'A', then 'B', then 'C', ..., finally 'Z'. It is easy to show that this gives the correct answer.
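The whole pipeline can be sketched compactly in Python (an illustration, not the reference implementation; it assumes an uppercase alphabet and that occurrences are counted with multiplicity over all patterns and positions):

```python
from collections import deque

ALPHABET = [chr(ord('A') + i) for i in range(26)]

def build_automaton(patterns):
    # trie, then BFS to fill the goto table, suffix links and exit counts
    nxt, term = [{}], [0]
    for pat in patterns:
        v = 0
        for ch in pat:
            if ch not in nxt[v]:
                nxt[v][ch] = len(nxt)
                nxt.append({})
                term.append(0)
            v = nxt[v][ch]
        term[v] += 1
    link = [0] * len(nxt)
    exit_cnt = term[:]                 # accumulated along suffix links below
    go = [{} for _ in nxt]
    q = deque()
    for ch in ALPHABET:
        u = nxt[0].get(ch, 0)
        go[0][ch] = u
        if u:
            q.append(u)
    while q:
        v = q.popleft()
        exit_cnt[v] += exit_cnt[link[v]]   # link[v] was processed before v
        for ch in ALPHABET:
            u = nxt[v].get(ch)
            if u is None:
                go[v][ch] = go[link[v]][ch]
            else:
                link[u] = go[link[v]][ch]
                go[v][ch] = u
                q.append(u)
    return go, exit_cnt

def smallest_string(patterns, n_occ):
    # BFS over (state, occurrence count) expanding edges in 'A'..'Z' order;
    # the first string reaching n_occ total occurrences is the shortest, and
    # among the shortest it is lexicographically smallest
    go, exit_cnt = build_automaton(patterns)
    seen = {(0, 0)}
    q = deque([(0, 0, "")])
    while q:
        v, occ, s = q.popleft()
        for ch in ALPHABET:
            u = go[v][ch]
            nocc = occ + exit_cnt[u]
            if nocc >= n_occ:
                return s + ch
            if (u, nocc) not in seen:
                seen.add((u, nocc))
                q.append((u, nocc, s + ch))
    return None
```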
Complexity: $$$O(N \cdot \sum_{i = 1}^{K} P_{i})$$$
G. Perfectly Balanced
Think of a DP solution where we maintain the contributions of $$$a_i$$$'s in the total distance.
What information do we need to maintain to find the contribution of $$$a_i$$$ to the total distance?
Do you think a bitmask denoting whether we need to add $$$a_i$$$ at each position in our visit order can help?
We can solve this problem using dynamic programming. The idea is to iterate through galaxies $$$1$$$ to $$$n-1$$$ and maintain the number of sequences of galaxies we visit whose contribution to the total distance from all distances $$$a_k$$$ ($$$k \leq i$$$) is known.
Let $$$dp[i][mask][j]$$$ be the number of sequences ($$$g_1$$$, $$$g_2 ... g_y$$$) of galaxies we visit such that:
- $$$mask$$$ is a bitmask of length $$$y$$$, which is $$$0$$$ at the $$$k$$$-th position if $$$g_k \leq i$$$ and $$$1$$$ otherwise.
- $$$j$$$ is the contribution of all distances $$$a_k$$$ (where $$$k \leq i$$$) to the total distance $$$\mod x$$$.
To write the transitions, we need the contribution of $$$a_i$$$ to the total distance, as the rest of the contribution can already be determined from $$$dp[i-1][\ldots][\ldots]$$$. Note that $$$a_i$$$ contributes exactly when there is a change from $$$0$$$ to $$$1$$$ (or vice versa) between adjacent bits of $$$mask$$$, as this implies that we move between two galaxies whose distance includes $$$a_i$$$. So, let's define $$$\text{changes}(mask)$$$ as the number of pairs of adjacent bits in $$$mask$$$ with different values.
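The quantity $$$\text{changes}(mask)$$$ can be computed in $$$O(1)$$$ with a shift and an XOR (a small Python sketch; the helper name is ours):

```python
def changes(mask, y):
    # number of adjacent bit pairs of the y-bit mask with different values:
    # XOR the mask with itself shifted by one, then count the set bits among
    # the y - 1 adjacent pairs
    return bin((mask ^ (mask >> 1)) & ((1 << (y - 1)) - 1)).count("1")
```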
Now we have two cases:
The galaxy $$$i$$$ was never visited in the sequence. Thus, $$$dp[i][mask][j] += dp[i - 1][mask][(j - \text{changes}(mask)\times a_i)\mod x]$$$.
The galaxy $$$i$$$ was visited at the $$$k$$$-th position, that is, $$$g_k = i$$$. So, when looking up the dp table for $$$i-1$$$, we need to switch on the $$$k$$$-th bit of $$$mask$$$. Let $$$mask'$$$ be $$$mask$$$ with the $$$k$$$-th bit turned on. Then, $$$dp[i][mask][j] += dp[i - 1][mask'][(j - \text{changes}(mask)\times a_i)\mod x]$$$.
(We can consider $$$a_n = 0$$$ since there is no further contribution to be made). It is now clear that our answer will be $$$dp[n][0][0]$$$, as all visited galaxies will lie before or on $$$n$$$.
Complexity: $$$O(nxy2^y)$$$.
H. Groot and The Groots
Generating functions.
Try solving the easier problem of counting the number of labelled bipartite graphs on n vertices.
Did you try FFT? Try using FFT at each step to reduce complexity.
Our solution is based on generating functions. For a better understanding, you should read the wonderful book generatingfunctionology up to its third chapter. We will use some terms defined in the book, which you can review below, or better yet, check out page 75 of the book.
Card($$$S$$$, $$$p$$$): A card $$$C(S, p)$$$ is a pair consisting of a picture $$$p$$$ and a label set $$$S$$$ consisting of some positive integers. The weight of $$$C$$$ is $$$|S|$$$. A card of weight $$$n$$$ is called standard if its label set is {$$$1, 2, ... n$$$}.
Hand: A hand $$$H$$$ is a set of cards whose label sets form a partition of {$$$1, 2, ... n$$$}, where $$$n$$$ is called the weight of the hand. $$$h_n$$$ is the number of hands of weight $$$n$$$.
Deck: The deck $$$D_n$$$ is the set of all standard cards of weight $$$n$$$, and $$$d_n$$$ is the size of $$$D_n$$$.
As an example, we can consider a labelled connected graph of $$$n_1$$$ vertices as a card. If the vertices are labelled from $$$1$$$ to $$$n_1$$$, then it will be a standard card. A hand of size $$$n_2$$$ is any labelled graph, connected or not, consisting of $$$n_2$$$ vertices labelled from $$$1$$$ to $$$n_2$$$. Thus, $$$h_n$$$ represents the number of labelled graphs of size $$$n$$$ and $$$d_n$$$ represents the number of labelled connected graphs of size $$$n$$$, assuming standard labelling.
The main theorem we will be using here is that if $$$H(x)$$$ and $$$D(x)$$$ represent the exponential generating functions of {$$$h_i$$$}$$$_{i=0}^\infty$$$ and {$$$d_i$$$}$$$_{i=0}^\infty$$$ respectively, then:
$$$H(x) = e^{D(x)}$$$
This is actually the main theorem of the 3rd chapter in the book (page 81) and is in fact a very useful combinatorial tool. You can apply it in many different situations, a famous example being that of counting connected graphs of size $$$n$$$.
Before we get to the main problem, let us start by solving the simpler problem of counting labelled bipartite graphs consisting of $$$n$$$ vertices labelled from $$$1$$$ to $$$n$$$. From now on, we will call such a labelling the standard labelling of size $$$n$$$, and such a graph a standard graph of size $$$n$$$.
This problem is covered in the book on page 81. You can read the solution over there, or from the explanation, we have written below. We will see that this solution mostly carries over to our problem with some modifications.
Let's start by solving a simpler version of this problem. Here, we have to count the number of standard bipartite graphs of size $$$n$$$, where each vertex is coloured either red or blue. In other words, we are counting the standard bipartite graphs where we have additionally specified the colour of each partite set. For instance, a bipartite graph of $$$c$$$ components can be coloured this way in $$$2^c$$$ ways (where we decide the colour of a partite set for each component).
This is easy to count: first we choose the colours of the vertices, and then for every pair of vertices of different colours we can either have an edge between them or not, giving us two choices for each possible edge. If we let $$$h_n$$$ denote the answer, then:
$$$h_n = \sum_{k=0}^{n} \binom{n}{k} 2^{k(n-k)}$$$
Think of what the cards are when we consider the set of standard bipartite graphs of size $$$n$$$ with vertices coloured red/blue as a hand of size $$$n$$$. Of course, a standard card of size $$$n$$$ is a connected such graph, and so $$$d_n$$$ represents the number of connected such graphs of size $$$n$$$. Using our theorem, we have:
$$$D(x) = \log{H(x)}$$$
The key thing here is we can now get rid of the vertex colours! Of course, since our bipartite graphs are now connected there are only two possible red/blue colourings for the entire graph. Thus, $$$\frac{d_n}{2}$$$ gives us the number of connected bipartite graphs of size $$$n$$$ with standard labelling.
Our problem of counting standard bipartite graphs of $$$n$$$ vertices is now straightforward. We have already calculated the sizes of the decks, {$$$\frac{d_i}{2}$$$}$$$_{i=0}^\infty$$$, whose egf we know to be $$$\frac{\log{H(x)}}{2}$$$. Thus, the number of hands of size $$$i$$$ (the number of standard bipartite graphs of size $$$i$$$) can be found by another application of our main theorem. In particular, its egf will be:
$$$e^{\frac{\log{H(x)}}{2}} = \sqrt{H(x)}$$$
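As a concrete check, the following Python sketch computes the series $$$\log$$$ and $$$\exp$$$ naively in $$$O(n^2)$$$ with exact fractions (an illustration only, not the fast implementation) and recovers the number of labelled bipartite graphs on up to $$$4$$$ vertices:

```python
from fractions import Fraction
from math import comb, factorial

def series_log(h, n):
    # log H = integral of H'/H, computed termwise; requires h[0] == 1
    hp = [Fraction(i + 1) * h[i + 1] for i in range(n)]   # H'
    q = [Fraction(0)] * n                                 # q = H'/H
    for i in range(n):
        q[i] = hp[i] - sum(q[j] * h[i - j] for j in range(i))
    return [Fraction(0)] + [q[i] / (i + 1) for i in range(n)]

def series_exp(d, n):
    # E = exp(D) satisfies E' = D'E, i.e. i * e_i = sum_j j * d_j * e_{i-j}
    e = [Fraction(1)] + [Fraction(0)] * n
    for i in range(1, n + 1):
        e[i] = sum(Fraction(j) * d[j] * e[i - j] for j in range(1, i + 1)) / i
    return e

N = 4
# egf of h_n = number of standard bipartite graphs with red/blue vertices
h = [Fraction(sum(comb(m, k) * 2 ** (k * (m - k)) for k in range(m + 1)),
              factorial(m)) for m in range(N + 1)]
d = [c / 2 for c in series_log(h, N)]   # egf of d_n / 2
b = series_exp(d, N)                    # sqrt(H): egf of bipartite-graph counts
bipartite = [int(b[m] * factorial(m)) for m in range(N + 1)]
# bipartite == [1, 1, 2, 7, 41]
```

The values $$$1, 2, 7, 41$$$ match the well-known counts of labelled bipartite graphs on $$$1$$$ to $$$4$$$ vertices (on $$$3$$$ vertices, every graph except the triangle is bipartite, giving $$$7$$$).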
We now come back to our original problem, wherein we need to count the number of standard bipartite graphs of size $$$n$$$ where:
- Each edge of the graph has been coloured with one of $$$p$$$ colours.
- For each component, precisely one vertex has been coloured with one of $$$k$$$ colours.
Let us review the solution of the simpler problem above. We had first calculated the number of bipartite graphs on $$$n$$$ vertices with standard labelling, where each vertex was coloured either red or blue, as:
$$$h_n = \sum_{k=0}^{n} \binom{n}{k} 2^{k(n-k)}$$$
Note that instead of choosing whether an edge exists or not in $$$2$$$ ways, we can now choose among $$$1+p$$$ options for each possible edge: either it is not present, or it is coloured in one of $$$p$$$ colours. Thus $$$h_n$$$, now the number of bipartite graphs on $$$n$$$ vertices with standard labelling where each vertex is red/blue and each edge is coloured in one of $$$p$$$ colours, is:
$$$h_n = \sum_{k=0}^{n} \binom{n}{k} (1+p)^{k(n-k)}$$$
Let $$$d_n$$$ now denote the number of connected standard bipartite graphs of size $$$n$$$ with edges coloured in one of $$$p$$$ colours. As before, the egf of {$$$d_i$$$}$$$_{i=0}^\infty$$$ is $$$D(x) = \frac{\log{H(x)}}{2}$$$, where $$$H(x)$$$ is the egf of {$$$h_i$$$}$$$_{i=0}^\infty$$$.
We can now take care of the second type of colouring, given that we are dealing with only connected bipartite graphs of size $$$n$$$. It is easy to see that $$$nkd_n$$$ is the number of connected standard bipartite graphs of size $$$n$$$ with edges coloured in one of $$$p$$$ colours and one vertex coloured in one of $$$k$$$ ways. Also, the egf of {$$$ikd_i$$$}$$$_{i=0}^\infty$$$ will be $$$kxD'(x)$$$. While coding you can just multiply each term of the sequence by $$$ik$$$, but here it is handy to write the egf as the derivative.
We are almost done! We have calculated the size of decks, {$$$ikd_i$$$}$$$_{i=0}^\infty$$$, and whose egf is $$$kxD'(x)$$$. Thus, the number of hands of size $$$n$$$ (the number of labelled standard bipartite graphs of size $$$n$$$, with edges coloured in one of $$$p$$$ colours and one vertex of each component in one of $$$k$$$ colours) can be found by another application of our main theorem. In particular, its egf will be $$$e^{kxD'(x)}$$$.
To summarize, let:
$$$H(x) = \sum_{i \geq 0} h_i \frac{x^i}{i!}, \qquad D(x) = \frac{\log{H(x)}}{2}, \qquad G(x) = e^{kxD'(x)}$$$
Then, the coefficient of $$$x^n$$$ in $$$G(x)$$$ multiplied by $$$n!$$$ (as $$$G(x)$$$ is egf), will give us the final answer.
Computing the polynomials
We have seen what our answer will be, in terms of polynomials. Here, we take a quick look at how to compute those polynomials in code. To solve our problem for $$$n$$$, it is sufficient to calculate all polynomials only up to degree $$$n$$$, as at no step above can a coefficient of degree greater than $$$n$$$ affect the $$$n$$$-th coefficient of $$$G(x)$$$.
We will thus need to compute $$$h_i$$$ for all values from $$$1$$$ to $$$n$$$. This can be done naively in $$$O(n^2)$$$ time. The only other operations that may take more than linear time are the $$$\exp$$$ and $$$\log$$$ of $$$n$$$-th degree polynomials. These have a detailed explanation on the CP-Algorithms page, but we can treat them as black boxes and note that the implementation takes only $$$O(n\log n)$$$ time. Thus, overall, $$$G(x)$$$ can be computed in $$$O(n^2)$$$ time.
We can reduce this to $$$O(n\log n)$$$ time using the idea in this blog. As,
$$$k(n-k) = \binom{n}{2} - \binom{k}{2} - \binom{n-k}{2}$$$
Thus,
$$$\frac{h_n}{n!\,(1+p)^{\binom{n}{2}}} = \sum_{k=0}^{n} \frac{1}{k!\,(1+p)^{\binom{k}{2}}} \cdot \frac{1}{(n-k)!\,(1+p)^{\binom{n-k}{2}}}$$$
Now if you consider the product:
$$$F(x)\cdot F(x), \qquad \text{where } F(x) = \sum_{i \geq 0} \frac{x^i}{i!\,(1+p)^{\binom{i}{2}}}$$$
It is clear that if we multiply the coefficient of $$$x^i$$$ in this product by $$$(1+p)^{\frac{i(i-1)}{2}}$$$, we obtain our egf $$$H(x)$$$. Thus, we have calculated $$$H(x)$$$ in $$$O(n\log n)$$$ time, as we just needed to multiply two polynomials. The final complexity of our solution is thus $$$O(n\log n)$$$.
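The reduction can be checked numerically; here is a Python sketch that squares $$$F(x)$$$ naively (standing in for the single FFT multiplication of the real solution) and compares against the direct formula for $$$h_n$$$:

```python
from fractions import Fraction
from math import comb, factorial

def h_via_square(n, p):
    # f_i = 1 / (i! * (1+p)^{i(i-1)/2}); the only convolution is squaring F
    f = [Fraction(1, factorial(i) * (1 + p) ** (i * (i - 1) // 2))
         for i in range(n + 1)]
    sq = [sum(f[k] * f[i - k] for k in range(i + 1)) for i in range(n + 1)]
    # restoring the (1+p)^{i(i-1)/2} factor gives the egf of h; multiplying
    # by i! recovers h_i itself
    return [int(sq[i] * (1 + p) ** (i * (i - 1) // 2) * factorial(i))
            for i in range(n + 1)]

def h_direct(n, p):
    # h_m = sum_k C(m, k) * (1+p)^{k(m-k)}
    return [sum(comb(m, k) * (1 + p) ** (k * (m - k)) for k in range(m + 1))
            for m in range(n + 1)]
```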
Complexity: $$$O(n\log n)$$$