Hope you enjoyed our contest! The editorial is below:
Author: DylanSmith
Idea: fishy15
The problem asks to place $$$n$$$ fishies into a $$$15\times25$$$ tank (including the borders). One way that we can accomplish this is by placing a single fishy at the beginning of rows $$$2$$$ to $$$n+1$$$, and no fishies in rows $$$n+2$$$ to $$$14$$$. This works since at most $$$10$$$ rows will have fishies (one for each fishy) and we have $$$13$$$ rows to work with. Make sure to print the border of the tank properly.
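The construction above can be sketched as follows (a hypothetical `draw_tank` helper; the actual border, water, and fishy characters come from the problem statement, so `'#'`, `'.'`, and `'f'` here are assumptions):

```python
def draw_tank(n, rows=15, cols=25):
    """Place one fishy at the start of interior rows 2..n+1 of a
    rows x cols tank whose outermost cells are the border."""
    grid = [['#'] * cols]                      # top border (character assumed)
    for r in range(1, rows - 1):
        line = ['#'] + ['.'] * (cols - 2) + ['#']
        if 1 <= r <= n:                        # overall rows 2..n+1 get a fishy
            line[1] = 'f'
        grid.append(line)
    grid.append(['#'] * cols)                  # bottom border
    return [''.join(row) for row in grid]
```

Since $$$n \le 10$$$ and there are $$$13$$$ interior rows, every fishy gets its own row.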
Fun fact, the judge solution places the fishies uniformly at random into the tank. You can view this solution here. (It is definitely not the simplest solution, but the output looks really nice)
Author: tn757
We can use a range query data structure to solve this problem. We are interested in two values: the sum of all previous numbers that are less than $$$c_i$$$, and the count of all previous numbers that are less than $$$c_i$$$. With these two values, we may add $$$c_i \cdot cnt$$$ and $$$sum$$$ to get the sum of $$$c_j + c_i$$$ over all previous values $$$c_j$$$ less than $$$c_i$$$. To get $$$cnt$$$ and $$$sum$$$, we can use two segment trees or Fenwick trees that let us query the count and sum of cards less than $$$c_i$$$. For the count tree, query the sum from index $$$1$$$ to $$$c_i - 1$$$, then update index $$$c_i$$$ to value $$$1$$$ after processing the card. For the sum tree, query the sum from index $$$1$$$ to $$$c_i - 1$$$, then update index $$$c_i$$$ to value $$$c_i$$$ after processing the card. In total, we will make $$$n$$$ queries, each taking $$$O(\log n)$$$ time, letting us solve this problem in $$$O(n \log n)$$$.
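The two-Fenwick-tree idea can be sketched like this (the `Fenwick` and `smaller_stats` names are illustrative; values are assumed to be positive integers at most `max_val`):

```python
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def update(self, i, v):          # add v at 1-based index i
        while i <= self.n:
            self.t[i] += v
            i += i & -i

    def query(self, i):              # prefix sum over [1, i]
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def smaller_stats(cards, max_val):
    """For each c_i, return (count, sum) of previous values less than c_i."""
    cnt_tree, sum_tree = Fenwick(max_val), Fenwick(max_val)
    out = []
    for c in cards:
        out.append((cnt_tree.query(c - 1), sum_tree.query(c - 1)))
        cnt_tree.update(c, 1)        # this card now counts as "previous"
        sum_tree.update(c, c)
    return out
```

From these pairs, $$$c_i \cdot cnt + sum$$$ gives the per-card value described above.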
Author: DylanSmith
Given a substring $$$p$$$ of $$$s$$$, we want to find the size of the union of the indices contained in substrings of $$$s$$$ that equal $$$p$$$. Intuitively we can picture the matching substrings as intervals of equal length, and we want to know how many indices are covered by the intervals.
To categorize the substrings of $$$s$$$ by their matches, I used a suffix automaton, but a suffix array can be used with the exact same result (they both serve the same purpose here), so I will refer to the suffix automaton version. The suffix automaton has $$$O(n)$$$ nodes, and each node represents a set of ending indices combined with a range of lengths, such that the substrings with those lengths ending at all such ending positions are matching. Suppose we want to only look at one node, and suppose we can afford to iterate over all of the endpoints that the node contains. Also suppose that the query string lies inside this node of the automaton. Then, if the length of the query string is $$$x$$$, we can divide the answer to the query into a sum of contributions from each endpoint in this node. Moreover, for an endpoint $$$i$$$, let $$$j$$$ be the largest endpoint in the node that is less than $$$i$$$ (or $$$-1$$$ if $$$i$$$ is the smallest endpoint). Then, the contribution of endpoint $$$i$$$ will be $$$\min(x, i - j)$$$. This is a piecewise linear function of $$$x$$$, and if we add up the individual functions for each endpoint, we obtain a function for the answer (for queries that refer to substrings in this node).
We now want to find an efficient way to store this function, if we want any hope of being able to extend the idea to all nodes in the automaton. My first approach (which passes but is quite slow) uses a sparse segment tree equipped with the operations defined in this problem. The range sum is overkill since we only need a point sum to answer a query, so I implemented the segment tree such that it isn't lazy in order to reduce the overhead. This gives us the ability to add and subtract individual functions to the entire function in $$$O(\log n)$$$ time, as well as to answer queries in $$$O(\log n)$$$ time.
Now we need to find a way to extend this idea to all nodes in the automaton. The suffix link tree is the tree formed by suffix links in the suffix automaton, where the suffix link of a node is a pointer to its parent node. We also know that each endpoint appears exactly in the nodes on some path from the root to the node created upon processing the endpoint during the suffix automaton creation. So, we can build the function for lower nodes in the tree, and then obtain the function for higher nodes with small-to-large merging. When merging one function into the other, we need to consider how the individual functions change as a result of inserting new endpoints. Specifically, if we add an endpoint $$$i$$$, then we want to look for the largest endpoint in the set that is less than $$$i$$$ and the smallest endpoint in the set that is greater than $$$i$$$. Let these be $$$l$$$ and $$$r$$$ respectively, either of which may not exist. If both $$$l$$$ and $$$r$$$ exist, then we want to subtract the function $$$\min(x, r - l)$$$. Then, if $$$r$$$ exists we will add the function $$$\min(x, r - i)$$$. Likewise, if $$$l$$$ exists we will add the function $$$\min(x, i - l)$$$. This gives us a way to merge the segment trees efficiently, and since merging adds an additional $$$O(\log n)$$$ factor to the entire tree traversal, we obtain a complexity of $$$O(n \log^2n)$$$ for maintaining the functions across all nodes in the automaton.
That was the hardest part. All we have left is to determine which queries belong to which node. Since the solution already has a complexity of $$$O(n \log^2n)$$$, I just maintained priority queues in each node and did independent small-to-large merging on them. This also has complexity $$$O(n \log^2 n)$$$, but other methods such as binary lifting can be used to achieve $$$O(n \log n)$$$ complexity for this part. In total the solution has complexity $$$O(n \log^2 n)$$$.
You can see my solution using this approach here.
Merging sparse segment trees is still quite overkill for what we need to accomplish, since we can recognize that the queries at each node will be strictly decreasing (due to the strictly decreasing length property of suffix automaton nodes as we traverse upwards). Thus, we can use a slope-trick-style approach: maintain the slope-changing points of the piecewise linear function in an ordered set, and to process a query, remove the largest slope-changing points until we reach the query size.
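The evaluation step can be sketched as follows: the function is $$$f(x) = \sum_g \min(x, g)$$$ over the stored gaps $$$g$$$, and because the query values are non-increasing, each gap is popped at most once (function name illustrative; a heap stands in for the ordered set):

```python
import heapq

def eval_decreasing(gaps, queries):
    """Evaluate f(x) = sum(min(x, g) for g in gaps) for a
    non-increasing sequence of query values x."""
    heap = [-g for g in gaps]        # max-heap via negation
    heapq.heapify(heap)
    uncapped = sum(gaps)             # gaps still contributing their own value
    capped = 0                       # gaps larger than the current x
    out = []
    for x in queries:
        while heap and -heap[0] > x:
            uncapped += heapq.heappop(heap)   # pops -g, so this subtracts g
            capped += 1
        out.append(uncapped + capped * x)     # large gaps are capped at x
    return out
```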
You can see my improved solution using this idea here.
Author: GhOsT16790
The problem involves selecting a subset of building expansion proposals along Austin's East–West highway such that no two approved proposals are closer than a given distance $$$d$$$. Each proposal is located at a distinct integer coordinate $$$x_i$$$ and has an associated profit $$$p_i$$$. The objective is to maximize the total profit while adhering to the spacing constraint, ensuring that if a proposal at $$$x_i$$$ is chosen, no other proposal within a distance $$$d$$$ from $$$x_i$$$ can be selected.
It is natural here to think about this as a dynamic programming problem with the help of a pointer to keep track of the distance requirement. Sorting is crucial because it allows the use of a two-pointer method to quickly find the last proposal that does not conflict with the current one based on the minimum distance $$$d$$$.
$$$\texttt{dp}[i]$$$ is maintained, where $$$\texttt{dp}[i]$$$ represents the maximum profit achievable considering proposals up to the $$$i$$$-th proposal (in sorted order). As the main loop iterates through each proposal $$$i$$$, a secondary pointer $$$j$$$ is used to advance through proposals that are far enough from $$$i$$$ (i.e., where the distance condition $$$\text{points}[i].\text{first} - \text{points}[j].\text{first} \ge d$$$ holds). The variable $$$\texttt{mx}$$$ keeps track of the maximum profit among these valid proposals. Consequently, the state transition is simply
$$$\texttt{dp}[i] = \texttt{mx} + p_i,$$$
with $$$p_i$$$ being the profit of the $$$i$$$-th proposal. The overall answer is the maximum value found in the DP array.
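This DP can be sketched as follows (a hypothetical `max_profit` helper; proposals are assumed to be given as $$$(x_i, p_i)$$$ pairs):

```python
def max_profit(proposals, d):
    """Max total profit with chosen coordinates pairwise >= d apart."""
    pts = sorted(proposals)          # sort by coordinate
    n = len(pts)
    dp = [0] * n
    mx = 0                           # best dp[j] among far-enough proposals
    j = 0
    for i in range(n):
        # advance j over every proposal at distance >= d from proposal i
        while j < i and pts[i][0] - pts[j][0] >= d:
            mx = max(mx, dp[j])
            j += 1
        dp[i] = mx + pts[i][1]       # take proposal i plus the best prefix
    return max(dp) if dp else 0
```

Since `j` only moves forward over the whole run, the loop is $$$O(n)$$$ after sorting.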
Author: mwen
Editorial: DylanSmith
Notice that the furthest point from a given line must lie on the convex hull of the points. Furthermore, there are at most two candidate points, located where the hull's edges are closest to being parallel to the line. Thus, we can sort both the queries and the hull edges by slope, and use rotating calipers or two pointers to find the two candidates for the furthest point. We can check the distance of both candidates to obtain the answer to a query.
Author: m7a1g5i8k8a1r4p0
Consider a normal DP implementation of this problem. We can define a DP state $$$dp[i][j][k]$$$ to be the probability that Michael is headed towards position $$$i$$$, arriving there in $$$j$$$ more minutes, with the current time being $$$k$$$ minutes. This means that when $$$j$$$ is $$$0$$$, Michael is at position $$$i$$$ and ready to move to another location. This way, we model each edge as (1) an instant move from one location to another and (2) a wait for a certain amount of time. The transitions fall into one of two cases:
- If Michael is at state $$$dp[i][0][k]$$$, then he will choose to go along any of the edges adjacent to $$$i$$$ with probability $$$\frac{1}{\text{deg}(i)}$$$. If he chooses to go along an edge to $$$j$$$ with length $$$l$$$, then $$$dp[j][l-1][k+1]$$$ will be incremented by $$$dp[i][0][k]\cdot\frac{1}{\text{deg}(i)}$$$.
- If Michael is at state $$$dp[i][j][k]$$$ with $$$j\neq 0$$$, then he must continue to wait at position $$$i$$$, so $$$dp[i][j-1][k+1]$$$ will be incremented by $$$dp[i][j][k]$$$.
Note that at each time step, we must clear out $$$dp[n][0][k]$$$ and add it to the answer, because Michael stops at position $$$n$$$.
However, this DP runs in $$$O(n^2\cdot \text{max}(t_i)\cdot k) = O(5n^2k)$$$, which is around $$$1.25\cdot 10^{10}$$$, so this is too slow.
We can instead use a matrix exponentiation approach. We will define a matrix $$$M$$$ of size $$$5n\times5n$$$. Here, we will compress the original $$$i$$$ and $$$j$$$ into a single state by doing $$$5i+j$$$, and then transform the DP transitions into transitions in this new matrix. $$$M[u][v]$$$ will be the probability that Michael will be at state $$$v$$$ starting from state $$$u$$$ after 1 minute. However, we must also consider the fact that Michael will stop moving after reaching position $$$n$$$. So, we will set $$$M[u][5(n-1)]$$$ to be 0 for all states $$$u$$$. We can imagine this as Michael leaving the matrix. Then, after exponentiating the matrix $$$M$$$ to the $$$k$$$th power, we can find the answer by taking $$$1-\sum_{u=0}^{5n-1} M^k[0][u]$$$. This is because the sum of all probabilities must equal 1, so the complement of the sum of the probabilities that Michael is still at some state $$$u$$$ is the probability that he has already reached position $$$n$$$.
This approach runs in $$$O((5n)^3\log k)$$$, which is around $$$3.11\cdot 10^8$$$, so this is fast enough.
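The matrix-power routine at the heart of this approach can be sketched as follows (a generic floating-point version; building the $$$5n \times 5n$$$ transition matrix from the DP above is omitted):

```python
def mat_mult(A, B):
    """Multiply two dense matrices, skipping zero entries of A."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            if A[i][k]:
                aik = A[i][k]
                for j in range(p):
                    C[i][j] += aik * B[k][j]
    return C

def mat_pow(M, k):
    """Compute M^k by binary exponentiation, O(n^3 log k)."""
    n = len(M)
    R = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    while k:
        if k & 1:
            R = mat_mult(R, M)
        M = mat_mult(M, M)
        k >>= 1
    return R
```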
Author: fishy15
We can binary search on the answer. Suppose we want to count how many subarrays have a median value that is at most $$$M$$$. We can mark every value that is less than or equal to $$$M$$$ with a $$$1$$$ and otherwise with $$$-1$$$. Then, if a subarray has a positive sum, there must be more numbers less than or equal to $$$M$$$ than numbers greater than $$$M$$$ in the subarray. Since the median is the first value in the second half of the list, this means that the median is at most $$$M$$$.
Using this, we can implement our binary search check. At each index, we update our prefix sum by adding or subtracting one as described above. The number of subarrays that end at this index with a positive sum is the number of prior prefixes with a smaller sum. We can use some range query data structure such as a segtree or Fenwick tree to update which prefix sums have been seen before to do this.
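The check can be sketched with a Fenwick tree over offset prefix sums (function name illustrative; it counts subarrays whose transformed $$$\pm 1$$$ sum is positive):

```python
def count_median_at_most(a, M):
    """Count subarrays whose median (first value of the second half)
    is <= M, via the +1/-1 prefix-sum trick."""
    n = len(a)
    size = 2 * n + 1                 # offset prefix sums lie in [0, 2n]
    tree = [0] * (size + 1)

    def update(i):                   # record one occurrence of prefix i
        i += 1
        while i <= size:
            tree[i] += 1
            i += i & -i

    def query(i):                    # how many recorded prefixes are <= i
        i += 1
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    pref, total = n, 0               # offset by n so indices stay >= 0
    update(pref)                     # the empty prefix
    for v in a:
        pref += 1 if v <= M else -1
        total += query(pref - 1)     # prior prefixes with strictly smaller sum
        update(pref)
    return total
```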
Sample solution: here
Author: RandyG
Because the shape is convex, the painted pixels in any row or column of the grid form a contiguous segment, so they can all be painted in one stroke. Now, consider each painted pixel to be an edge between two sets of vertices: the set of rows and the set of columns. This forms a bipartite graph, and the goal is to find the minimum vertex cover, which by Kőnig's theorem is equivalent to the maximum matching problem. This can be accomplished with Kuhn's algorithm in $$$O(n^3)$$$, which is fast enough.
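Kuhn's algorithm can be sketched as follows (building the adjacency lists from the painted pixels is omitted; `adj[u]` lists the column vertices adjacent to row vertex `u`):

```python
def max_matching(adj, n_left, n_right):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching."""
    match_r = [-1] * n_right         # match_r[v] = left vertex matched to v

    def try_kuhn(u, seen):
        # try to find an augmenting path starting from left vertex u
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_r[v] == -1 or try_kuhn(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    matching = 0
    for u in range(n_left):
        if try_kuhn(u, [False] * n_right):
            matching += 1
    return matching
```

By Kőnig's theorem, this matching size equals the minimum number of strokes.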
Author: bubbarob19
Notice that it is best to try to have the MEX of the pile sizes as large as possible, as when we make a move, this MEX is added to the score. It turns out we can maximize the MEX throughout the sequence of operations with a greedy construction.
The first step of the construction is to build up to the maximum possible MEX, $$$x$$$. Notice that it takes $$$x$$$ piles to make the MEX $$$x$$$, and once we perform operations such that the MEX of those $$$x$$$ piles is $$$x$$$, we can remove all of the balls in the rest of the piles freely, as all of these remove operations are optimal due to the MEX already being at its maximum possible value.
To find $$$x$$$ and optimally perform operations on $$$x$$$ piles to form this MEX, it suffices to sort the pile sizes and maintain a running MEX that we have formed so far. If we encounter any pile size that is greater than or equal to our running MEX, we can set that pile's size to the running MEX value (through remove operations), thus increasing the running MEX. It is optimal to build the MEX from the bottom up this way, as the running MEX will be the biggest it possibly can be for each operation. For any pile that does not contribute to our MEX, we can mark it as extraneous so we can remove it later, after we finish building our maximum MEX.
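The running-MEX step can be sketched as (a hypothetical `max_mex` helper that returns only the value $$$x$$$, not the sequence of operations):

```python
def max_mex(piles):
    """Greedy from the editorial: sort sizes; any pile whose size is at
    least the running MEX can be reduced to exactly that value, raising
    the running MEX by one."""
    mex = 0
    for size in sorted(piles):
        if size >= mex:
            mex += 1
    return mex
```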
After building up the maximum MEX and removing extraneous piles, we now have to perform operations to remove the balls from the piles that constructed our maximum MEX ($$$0, 1, \dots, x - 1$$$). To do this optimally, we can greedily remove all balls from the largest piles first, as keeping the small piles intact will allow the MEX to remain as large as possible.
After these steps, we have removed all balls and have found our maximum score.
Here is my code: Click Here
Author: ChiMasterBing
The problem asks whether it is possible to get stuck in a building with some constant access-level keycard.
Consider answering the related problem: what is the smallest access-level required to enter each room? If we model the floorplan as a weighted directed graph, we can compute this using an approach similar to Dijkstra's. Specifically, if we add edges to the priority queue ordered by access-level, then at any point, the minimum access-level required to enter some room is the largest access-level popped from the priority queue along the way.
We can use the exact same approach to compute the minimum access-level required to exit some room by reversing the edges.
Once we have computed both values, it suffices to check, for each room, whether the access-level required to enter is less than the access-level required to exit; if such a room exists, a keycard with an access-level in between can enter that room but never leave, so it is possible to get stuck.
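The Dijkstra-like sweep for minimum entry levels can be sketched as follows (a minimax/bottleneck variant; `edges` are assumed to be directed $$$(u, v, w)$$$ triples meaning that moving from room $$$u$$$ to room $$$v$$$ requires access-level at least $$$w$$$, and running it again on reversed edges gives the exit levels):

```python
import heapq

def min_level_to_reach(n, edges, src):
    """best[v] = smallest keycard level that can reach room v from src,
    i.e. the minimum over paths of the maximum edge weight on the path."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    best = [float('inf')] * n
    best[src] = 0
    pq = [(0, src)]
    while pq:
        lvl, u = heapq.heappop(pq)
        if lvl > best[u]:
            continue                      # stale entry
        for v, w in adj[u]:
            need = max(lvl, w)            # bottleneck along this path
            if need < best[v]:
                best[v] = need
                heapq.heappush(pq, (need, v))
    return best
```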
K — Philadelphia Museum of Art
Author: zmonster8
We can consider the tree rooted at room 1, since the players cannot return to previously visited rooms. Then, the key idea is to simulate the game with DP: for each room/node $$$x$$$, we calculate the best scores Kuroni and Tfg can achieve starting from $$$x$$$, assuming optimal play. Note that Kuroni's best score at $$$x$$$ depends on the best scores Tfg can achieve in each of $$$x$$$'s children. From those final states, Kuroni would choose the one that leads to the highest possible score for himself, even if it's Tfg's turn next. We calculate these values using a post-order DFS, processing each subtree before its parent.
Author: WalrusRamen21
We're asked to calculate the following:
For any two natural numbers $$$a$$$ and $$$b$$$, $$$ab = \gcd(a, b) \cdot \operatorname{lcm}(a, b)$$$. Rewriting and factoring, we get
If we group the integers in the interval $$$[1, n]$$$ by their gcd with $$$n$$$, we can simplify this process a lot. For each divisor $$$d$$$ of $$$n$$$, we can sum all the integers up to $$$n$$$ such that $$$\gcd(i, n) = d$$$, and then divide this by $$$d$$$ to add to a running sum. For each divisor $$$d$$$, all numbers such that $$$\gcd(i, n) = d$$$ can be written as $$$i = kd$$$ where $$$1 \le k \le \frac{n}{d}$$$ and $$$\gcd(k,n/d) = 1$$$. Thus,
Let $$$m = \frac{n}{d}$$$, so the set of all $$$m$$$ is equivalent to the set of all $$$d$$$. Then,
The inner summation for a given $$$m$$$ is the sum of all integers up to $$$m$$$ that are coprime to $$$m$$$. This is calculated as
$$$\sum_{\substack{1 \le k \le m \\ \gcd(k, m) = 1}} k = \frac{m \cdot \phi(m)}{2}$$$ for $$$m > 1$$$ (and $$$1$$$ for $$$m = 1$$$),
which can be proven by cases (hint: realize that $$$\gcd(k, m) = 1$$$ iff $$$\gcd(m-k, m) = 1$$$). Thus, our answer is
As for the time complexity, I did my best to figure this out too. It takes $$$\mathcal O(\sqrt{n})$$$ time to find all the factors of $$$n$$$, and for each factor $$$m$$$ we can calculate $$$\phi(m)$$$ in $$$\mathcal O(\sqrt{m})$$$ time. We can write a very rough upper bound as $$$\sum_{i=1}^{\sqrt{n}} \sqrt{i} = \mathcal O(n^{3/4})$$$. In practice, however, it ends up running a lot closer to $$$\mathcal O(n^{1/2 + \epsilon})$$$ time for some small non-zero $$$\epsilon$$$, allowing the constraint to be lifted from $$$10^9$$$ to $$$10^{12}$$$ comfortably.
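The building blocks can be sketched as follows (`grouped_sum` is an illustrative name and only computes $$$\sum_{m \mid n} \sum_{\gcd(k,m)=1, k \le m} k$$$; whether the final answer needs an extra factor, e.g. of $$$n$$$, depends on the exact statement):

```python
def phi(m):
    """Euler's totient via trial division, O(sqrt(m))."""
    result, p = m, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:                        # leftover prime factor
        result -= result // m
    return result

def coprime_sum(m):
    """Sum of 1 <= k <= m with gcd(k, m) = 1: m*phi(m)/2, except m = 1."""
    return 1 if m == 1 else m * phi(m) // 2

def grouped_sum(n):
    """Sum coprime_sum(m) over all divisors m of n."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += coprime_sum(d)
            if d != n // d:          # avoid double-counting a square divisor
                total += coprime_sum(n // d)
        d += 1
    return total
```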
Here is my solution.
Author: meize
We can approach this problem using knapsack dynamic programming. Define $$$dp[i][w]$$$ as the number of ways for Randy to dump the first $$$i$$$ boxes of tea where the total weight is $$$w$$$ pounds. We use the following transition: \begin{align*} dp[i][w] = dp[i - 1][w - w_i] + dp[i - 1][w] \end{align*}
To make this solution fast enough, observe that after $$$\sqrt{\sum_{i = 1}^n w_i}$$$ seconds, Randy can arbitrarily dump the remaining boxes without worrying about the constraints placed by the party organizers. Thus, we simply double the answer for every box after this point.
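The counting knapsack can be sketched as (a 1-D rolling version of the 2-D transition above; the doubling trick for boxes past the threshold is not shown):

```python
def knapsack_counts(weights, W):
    """dp[w] = number of subsets of the boxes with total weight exactly w."""
    dp = [0] * (W + 1)
    dp[0] = 1                        # one way to pick nothing
    for wi in weights:
        for w in range(W, wi - 1, -1):   # iterate downward: each box once
            dp[w] += dp[w - wi]
    return dp
```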
You can see my solution here.



