Thank you for helping Marisa, Reimu, Cirno, Sanae, Amanojaku and Momoyo!
2228A - Marisa Steals Reimu's Takeout
Author: Sanae
Solution: Sanae
Lemma 1. There exists an optimal solution in which each zero is divided separately.
Proof. Suppose in some optimal solution, a zero is grouped with other elements. Separating it into a singleton group does not decrease the total score. Hence, we can transform any optimal solution into one where all zeros are divided separately without worsening the objective. $$$\blacksquare$$$
Therefore, we can restrict our attention to the subsequence consisting only of $$$1$$$s and $$$2$$$s.
Lemma 2. There exists an optimal solution in which any group containing both $$$1$$$ and $$$2$$$ is exactly either $$$[1,2]$$$ or $$$[2,1]$$$.
Proof. Consider a group that contains $$$1$$$, $$$2$$$, and possibly other elements. Splitting it into $$$[1,2]$$$ (or $$$[2,1]$$$) and the remaining part does not decrease the total score. Consequently, we can transform any optimal solution into the desired form. $$$\blacksquare$$$
By Lemma 2, we can greedily extract all $$$[1,2]$$$ or $$$[2,1]$$$ pairs from the remaining sequence.
Finally, for the leftover elements, the best strategy is clearly to take triplets $$$[1,1,1]$$$ and $$$[2,2,2]$$$ whenever possible.
2228B - Remilia Plays Soku
Author: Sanae
Solution: Sanae
We assume $$$n\geq 4$$$ in what follows.
Let us analyze the strategies of both players. Reimu aims to get as close to Remilia as possible, while Remilia aims to get as far away from Reimu as possible. Remilia will use her moves to increase the distance without being caught, and indeed it is optimal for her to use as many moves as possible.
The distance on the circle can be measured with $$$d=\min(|x_1-x_2|,n-|x_1-x_2|)$$$. Once $$$d=1$$$, Remilia can increase $$$d$$$ by $$$1$$$ since the circle is large enough ($$$n\geq 4$$$). Similarly, Reimu can decrease $$$d$$$ by $$$1$$$ each turn.
Therefore, we can calculate the outcome directly. The answer is the initial distance plus the extra time gained by Remilia's moves, $$$\min(|x_1-x_2|,n-|x_1-x_2|)+k$$$.
Note that when $$$n\leq 3$$$, the answer is always $$$1$$$, because Reimu can always catch Remilia in one step.
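The whole computation above fits in a few lines; a minimal sketch (the function name `solve` and its argument order are my own, not from the problem statement):

```python
def solve(n: int, k: int, x1: int, x2: int) -> int:
    """Turns until Reimu catches Remilia, per the editorial:
    shorter-arc distance plus the k extra moves Remilia can spend fleeing."""
    if n <= 3:
        return 1  # the circle is too small for Remilia to ever gain distance
    d = abs(x1 - x2)
    d = min(d, n - d)  # distance measured along the shorter arc
    return d + k
```

For example, `solve(6, 3, 1, 4)` gives $$$3+3=6$$$.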
2228C1 - Cirno and Number (Easy Version) and 2228C2 - Cirno and Number (Hard Version)
The case $$$a=b$$$ is trivial. We consider two cases separately: $$$a \gt b$$$ and $$$a \lt b$$$.
We first discuss $$$a \lt b$$$. Consider the longest common prefix of $$$a$$$ and $$$b$$$, scanning from the most significant digit to the least significant one. Let $$$x$$$ be the digit of $$$a$$$ at the first position after that prefix and $$$y$$$ the digit of $$$b$$$ at that position. If $$$d_n \gt x$$$, choose $$$y$$$ as the smallest $$$d_i$$$ such that $$$d_i \gt x$$$, and set all the digits of $$$b$$$ below that position to $$$d_1$$$. This guarantees that $$$b$$$ is the smallest number greater than $$$a$$$ whose decimal representation contains only digits from $$$d$$$. Otherwise, if $$$d_n \lt x$$$, we should change an earlier digit to satisfy the inequality. Note that $$$b$$$ may also need more digits than $$$a$$$.
Similarly, assume $$$a \gt b$$$. Again consider the longest common prefix of $$$a$$$ and $$$b$$$, scanning from the most significant digit to the least significant one. Let $$$x$$$ be the digit of $$$a$$$ at the first position after that prefix and $$$y$$$ the digit of $$$b$$$ at that position. If $$$d_1 \lt x$$$, choose $$$y$$$ as the largest $$$d_i$$$ such that $$$d_i \lt x$$$, and set all the digits of $$$b$$$ below that position to $$$d_n$$$. This guarantees that $$$b$$$ is the largest number smaller than $$$a$$$ whose decimal representation contains only digits from $$$d$$$. Otherwise, if $$$d_1 \gt x$$$, we should change an earlier digit to satisfy the inequality. Note that if $$$d=[1,2,3]$$$, then $$$b=0$$$ is invalid, because $$$0$$$ doesn't belong to $$$d$$$.
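As a sketch of the $$$a \lt b$$$ half only (the function name and interface are mine; the $$$b \lt a$$$ half is symmetric): the smallest $$$b \gt a$$$ built from a digit set is found by matching a prefix of $$$a$$$ exactly, placing the smallest allowed digit that exceeds the next digit of $$$a$$$, padding with the smallest allowed digit, and also considering a number with one more digit.

```python
def smallest_greater(a: int, digits: list[int]) -> int:
    """Smallest b > a whose decimal digits all belong to `digits`."""
    s = str(a)
    n = len(s)
    ds = sorted(set(digits))
    best = None
    for i in range(n):
        # Prefix s[:i] is matched exactly; place a strictly larger digit at i.
        bigger = [d for d in ds if d > int(s[i])]
        if bigger:
            cand = int(s[:i] + str(bigger[0]) + str(ds[0]) * (n - i - 1))
            best = cand if best is None else min(best, cand)
        if int(s[i]) not in ds:
            break  # cannot extend the matched prefix any further
    # A number with one more digit is always another candidate.
    nonzero = [d for d in ds if d > 0]
    if nonzero:
        cand = int(str(nonzero[0]) + str(ds[0]) * n)
        best = cand if best is None else min(best, cand)
    return best
```

For instance, `smallest_greater(605, [2, 5, 6, 9])` returns `622`; a full solution must also run the symmetric largest-smaller routine and take the closer of the two (for $$$a=605$$$ with digits $$$\{2,5,6,9\}$$$, the winner is $$$b=599$$$ from the smaller side).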
Depending on the implementation, the time complexity can be $$$O(10n\log a)$$$, $$$O(10n\log^2 a)$$$, $$$O(n\log a)$$$ or $$$O(n\log^2 a)$$$.
2228D - Sanae, Cross and Color
Author: Sanae
Solution: Sanae, fanhuaxingyu
We can apply the principle of inclusion-exclusion to compute the answer. First, enumerate a vertical dividing line at integer coordinate $$$x$$$ from $$$1$$$ to $$$n$$$; the contribution is recalculated only when the set of points on the left side changes; otherwise, we skip the redundant computation.
For each such $$$x$$$, we then enumerate a horizontal dividing line at integer $$$y$$$ from $$$1$$$ to $$$n$$$, while carefully excluding overlapping cases to avoid double-counting. Let the $$$y$$$-coordinates of the points on the left side be $$$\{y_1,y_2,\dots,y_k\}$$$, sorted in increasing order. For consecutive values $$$y_i$$$ and $$$y_{i+1}$$$, all dividing positions of the form $$$t+0.5$$$ in the interval $$$(y_i,\;y_{i+1})$$$ give the same left-side point set. Thus, the left side produces a set of $$$y$$$-intervals, and similarly, the right side also forms a set of $$$y$$$-intervals. To count all valid configurations, we need to count the number of interval pairs (one interval from the left, one from the right) that intersect in at least one point. This can be computed efficiently using a Fenwick tree (binary indexed tree) and a set data structure.
The overall time complexity is $$$O(n\log n)$$$.
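The interval-pair counting step can be sketched in isolation. A minimal version (my own simplification, using sorting and binary search instead of a Fenwick tree) counts the disjoint pairs and subtracts them from the total:

```python
from bisect import bisect_left

def intersecting_pairs(left: list[tuple[int, int]],
                       right: list[tuple[int, int]]) -> int:
    """Count pairs (one interval from each list) sharing at least one point.
    Two closed intervals are disjoint iff one ends strictly before the other
    starts, so we count disjoint pairs and subtract from the total."""
    ends_l = sorted(r for _, r in left)
    ends_r = sorted(r for _, r in right)
    disjoint = 0
    for l, _ in left:
        disjoint += bisect_left(ends_r, l)   # right-intervals ending before l
    for l, _ in right:
        disjoint += bisect_left(ends_l, l)   # left-intervals ending before l
    return len(left) * len(right) - disjoint
```

In the actual solution this count must be maintained under insertions and deletions of intervals, which is where the Fenwick tree and the set come in.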
An observation is that any valid dividing line must cross at least one of the two groups determined by the smaller y-coordinate.
Suppose the line crosses the group with smaller $$$x$$$ and smaller $$$y$$$. Enumerate the point whose $$$y$$$-coordinate is maximal within this group (and among those with the same maximal $$$y$$$, take the one with the larger $$$x$$$). Using appropriate pre-computation, we can efficiently determine the feasible vertical dividing position $$$x$$$.
The case where the line crosses the group with larger $$$x$$$ but smaller $$$y$$$ is symmetric to the above.
The overall time complexity is $$$O(n)$$$.
2228E1 - Amanojaku and Sequence (Easy Version) and 2228E2 - Amanojaku and Sequence (Hard Version)
Author: Sanae
Solution: Sanae
Tutorial: CirnoNine, fanhuaxingyu
First, consider solving the whole array. We can first compute the total number of possible configurations of $$$a$$$ using stars and bars. Let this number be $$$tot$$$. The remaining sum to be freely distributed is $$$sum= m - \sum_{i} a_i[a_i\neq -1]$$$, and the number of undetermined positions is $$$cnt=\sum_{i} [a_i = -1]$$$. It is obvious $$$tot=\binom{sum+cnt-1}{cnt-1}$$$. We adopt the convention $$$\binom{-1}{-1}=1$$$ to simplify boundary cases.
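A brute-force cross-check of this stars-and-bars count (function names are my own):

```python
from itertools import product
from math import comb

def count_configs(a: list[int], m: int) -> int:
    """Number of ways to replace the -1's in a by nonnegative integers
    so that the total sum equals m (stars and bars)."""
    cnt = sum(1 for x in a if x == -1)
    rem = m - sum(x for x in a if x != -1)
    if rem < 0:
        return 0
    if cnt == 0:  # the convention binom(-1,-1) = 1 corresponds to rem == 0
        return 1 if rem == 0 else 0
    return comb(rem + cnt - 1, cnt - 1)

def count_configs_brute(a: list[int], m: int) -> int:
    """Exhaustive check of count_configs for tiny inputs."""
    cnt = sum(1 for x in a if x == -1)
    rem = m - sum(x for x in a if x != -1)
    if cnt == 0:
        return 1 if rem == 0 else 0
    if rem < 0:
        return 0
    return sum(1 for t in product(range(rem + 1), repeat=cnt) if sum(t) == rem)
```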
Denote $$$S_1=2\sum_{b \text{ is valid}}\sum_{1\leq i \lt j\leq n}b_ib_j (n+1-j)$$$ and $$$S_2=\sum_{b \text{ is valid}}\sum_{1\leq i \leq n}b_i^2 (n+1-i)$$$. We should compute each part individually.
To do the computation, we first introduce some useful identities and the quantities $$$T_0,T_1,T_2$$$, where $$$T_k(n,m)=\sum_{x=0}^{n}x^k\binom{x+m}{m}$$$.
We recall the following standard combinatorial identities:

1. If $$$n\geq 0$$$, then $$$\binom{n}{m}=\binom{n}{n-m}$$$;
2. For any $$$n,m$$$, $$$m\binom{n}{m}=n\binom{n-1}{m-1}$$$;
3. If $$$n,m\geq 0$$$, then $$$n\binom{n+m}{n}=(m+1)\binom{n+m}{m+1}$$$.
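These identities can be checked numerically; a small brute-force sketch with `math.comb` (the ranges are arbitrary, and `math.comb(n, k)` returning $$$0$$$ for $$$k \gt n$$$ matches the convention used here):

```python
from math import comb

for n in range(10):
    for m in range(10):
        if m <= n:
            assert comb(n, m) == comb(n, n - m)                  # identity 1
        if n >= 1 and m >= 1:
            assert m * comb(n, m) == n * comb(n - 1, m - 1)      # identity 2
        assert n * comb(n + m, n) == (m + 1) * comb(n + m, m + 1)  # identity 3
print("all identities hold")
```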
$$$T_0$$$ is classic, while $$$T_1$$$ and $$$T_2$$$ need some work.
The computation of $$$T_1$$$ is straightforward.
Similarly, we compute $$$T_2$$$:
We now derive $$$T_3$$$:
$$$\begin{aligned} T_3(n,m) &= \sum_{x=0}^{n} x^3\binom{x+m}{m} = (m+1)\sum_{x=0}^{n} x^2\binom{x+m}{m+1}\\ &= (m+1)\sum_{x=0}^{n-1} (x+1)^2\binom{x+m+1}{m+1}\\ &= (m+1)\left(\sum_{x=0}^{n-1} x^2\binom{x+m+1}{m+1}+2\sum_{x=0}^{n-1} x\binom{x+m+1}{m+1}+\sum_{x=0}^{n-1} \binom{x+m+1}{m+1}\right)\\ &= (m+1)\bigl(T_2(n-1,m+1)+2T_1(n-1,m+1)+T_0(n-1,m+1)\bigr) \end{aligned}$$$

Note: it can be further simplified, but we can just stop here.
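The recurrence for $$$T_3$$$ can be sanity-checked by brute force, taking $$$T_k(n,m)=\sum_{x=0}^{n}x^k\binom{x+m}{m}$$$ (my notation for the sums appearing in the derivation):

```python
from math import comb

def T(k: int, n: int, m: int) -> int:
    """T_k(n, m) = sum_{x=0}^{n} x^k * C(x+m, m)."""
    return sum(x**k * comb(x + m, m) for x in range(n + 1))

# Check T_3(n,m) = (m+1) * (T_2(n-1,m+1) + 2*T_1(n-1,m+1) + T_0(n-1,m+1)).
for n in range(0, 12):
    for m in range(0, 8):
        lhs = T(3, n, m)
        rhs = (m + 1) * (T(2, n - 1, m + 1)
                         + 2 * T(1, n - 1, m + 1)
                         + T(0, n - 1, m + 1))
        assert lhs == rhs
print("T_3 recurrence verified")
```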
We first deal with $$$S_2$$$: $$$S_2=\sum_{1\leq i \leq n}\sum_{b \text{ is valid}}b_i^2 (n+1-i)$$$. If $$$a_i\geq 0$$$, then $$$b_i=a_i$$$, so the term can be written as $$$a_i^2(n+1-i)\cdot tot$$$. If $$$a_i=-1$$$, the term can be written as follows:
Second, we cope with $$$S_1$$$.
$$$S_1=2\sum_{b \text{ is valid}}\sum_{1\leq i \lt j\leq n}b_ib_j (n+1-j)$$$
After selecting $$$i,j$$$, we only need to compute $$$\sum_{b \text{ is valid}}b_ib_j$$$.
There are three cases:
1. $$$a_i\neq -1 \text{ and } a_j\neq -1$$$.
$$$\sum_{b \text{ is valid}}b_ib_j=tot\cdot a_ia_j$$$
2. $$$a_i\neq -1 \text{ and } a_j = -1$$$, or $$$a_i = -1 \text{ and } a_j \neq -1$$$.
It suffices to consider $$$a_i\neq -1 \text{ and } a_j = -1$$$, because the other case is symmetric.
It can also be computed by the expectation of each position.
3. $$$a_i = -1 \text{ and } a_j = -1$$$.
It is the trickiest part of the solution. We analyze this case carefully.
It is just a constant, which is easy to compute.
With these cases in hand, we return to $$$S_1$$$.
$$$S_1=2\sum_{b \text{ is valid}}\sum_{1\leq i \lt j\leq n}b_ib_j (n+1-j)$$$
For type $$$1$$$, it suffices to compute
.
For type $$$2$$$, it suffices to compute
.
For type $$$3$$$, it suffices to compute
.
All required quantities can be maintained using a segment tree, and the interval queries and modifications can be processed easily. Note that $$$n-j+1=(n+1)-j$$$. So process $$$X \cdot(n+1)$$$ and $$$X \cdot j$$$ in two data structures. Implementation details are omitted for brevity; see the accompanying code.
Querying the tree yields the answer, and the stored information can also be modified efficiently.
Overall, we solve the problem with total time complexity $$$O(1\ 000\ 000)$$$ for precomputation plus $$$O((n+q)\log n)$$$ for the queries, and total space complexity $$$O(1\ 000\ 000+n)$$$.
Note $$$\binom{-1}{-1}=1$$$ in the code.
For a fixed query interval, denote it by $$$b_1,b_2,\ldots,b_L$$$.
Define
and
Here, $$$A_i$$$ is the prefix sum contributed by fixed positions, and $$$T_i$$$ is the number of free positions in the first $$$i$$$ elements.
Let
The total value that must be assigned to all free positions is
If $$$N \lt 0$$$, there is no valid sequence, so the answer is $$$0$$$.
If $$$k=0$$$, there is only one possible sequence. The answer is $$$\sum_i A_i^2$$$ if $$$N=0$$$, and $$$0$$$ otherwise.
Now assume $$$k \gt 0$$$ and $$$N\ge 0$$$.
Let the values assigned to the $$$k$$$ free positions be
where
The number of such assignments is
For prefix $$$i$$$, there are exactly $$$T_i$$$ free positions inside it. Therefore, the contribution of assigned values to this prefix is
So the real prefix sum at position $$$i$$$ is
Therefore, we need to calculate
Expanding the square gives
Now we calculate these three parts separately.
First, consider the $$$A_i^2$$$ part.
Since $$$A_i$$$ does not depend on the assignment, every valid assignment contributes the same value. Hence this part contributes
Second, consider the $$$2A_iX_{T_i}$$$ part.
By symmetry, among all weak compositions of $$$N$$$ into $$$k$$$ parts, every variable has the same total contribution. Thus
Therefore,
Substituting $$$t=T_i$$$, we get
After summing over all $$$i$$$, this part contributes
Third, consider the $$$X_{T_i}^2$$$ part.
We need the second moment of
For all weak compositions of $$$N$$$ into $$$k$$$ parts, we have
and for $$$p\ne q$$$,
Since
we get
Equivalently,
Substituting $$$t=T_i$$$ and summing over all positions, the final formula is
All divisions are done modulo $$$998244353$$$ using modular inverses.
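The displayed moment formulas did not survive in this rendering, but the symmetry facts they rely on can be checked by brute force over all weak compositions (a small sketch with names of my own choosing):

```python
from itertools import product
from math import comb

def check_moments(N: int, k: int) -> None:
    """Verify stars-and-bars count and moment symmetries by enumeration."""
    comps = [t for t in product(range(N + 1), repeat=k) if sum(t) == N]
    total = len(comps)
    assert total == comb(N + k - 1, k - 1)  # stars and bars
    # First moment: by symmetry every position carries the same total mass.
    s1 = sum(t[0] for t in comps)
    assert s1 * k == N * total
    # Second moments: sum_p x_p^2 + sum_{p != q} x_p x_q = N^2 per composition.
    if k >= 2:
        m2 = sum(t[0] ** 2 for t in comps)
        m11 = sum(t[0] * t[1] for t in comps)
        assert k * m2 + k * (k - 1) * m11 == N * N * total

for N in range(0, 7):
    for k in range(1, 5):
        check_moments(N, k)
print("moment identities verified")
```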
Now we explain why this can be maintained by a segment tree.
For every interval, the formula only needs the following values:
Therefore, each segment tree node maintains:
Here, $$$len$$$ is the length of the segment, $$$sum$$$ is the total fixed value in the segment, and $$$cnt$$$ is the number of $$$-1$$$ positions in the segment.
The other values are:
All prefix values are defined relative to the left end of the current segment.
When merging two adjacent segments $$$L$$$ and $$$R$$$, every prefix inside $$$L$$$ remains unchanged.
For a prefix inside $$$R$$$, the entire left segment is already before it. Therefore,
and
So the merge formulas are:
Thus, the maintained information is closed under concatenation, so it is suitable for a segment tree.
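Since the merge formulas themselves were not preserved above, here is one consistent sketch (node layout and naming are mine): each node stores $$$(len, sum, cnt, \sum A_i, \sum A_i^2, \sum C_i, \sum C_i^2, \sum A_iC_i)$$$ with prefixes relative to the segment's left end, and merging shifts every prefix in the right child by the left child's totals.

```python
def leaf(v):
    """Node for a single position: v is the fixed value, or None for -1."""
    a = v if v is not None else 0
    c = 1 if v is None else 0
    #      len sum cnt SA  SA2    SC  SC2    SAC
    return (1, a, c, a, a * a, c, c * c, a * c)

def merge(L, R):
    lenL, sumL, cntL, SAL, SA2L, SCL, SC2L, SACL = L
    lenR, sumR, cntR, SAR, SA2R, SCR, SC2R, SACR = R
    # Every prefix inside R gains sumL fixed value and cntL free positions.
    return (lenL + lenR,
            sumL + sumR,
            cntL + cntR,
            SAL + SAR + lenR * sumL,
            SA2L + SA2R + 2 * sumL * SAR + lenR * sumL * sumL,
            SCL + SCR + lenR * cntL,
            SC2L + SC2R + 2 * cntL * SCR + lenR * cntL * cntL,
            SACL + SACR + sumL * SCR + cntL * SAR + lenR * sumL * cntL)
```

Folding `merge` over the leaves of any segment reproduces the direct prefix-sum computation, which is exactly the closure-under-concatenation property needed for a segment tree.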
A point update only changes one leaf, and then all nodes on the path to the root are recomputed.
For a range query, we merge $$$O(\log n)$$$ segment tree nodes in left-to-right order. This gives all values required by the formula above.
After precomputing factorials and inverse factorials for combinations, the total complexity is
Solution
Let sum be the sum of all fixed values in the interval, and let cnt be the number of -1's.
If $$$sum > m$$$, or $$$cnt = 0$$$ and $$$sum \neq m$$$, then no valid sequence exists, and the answer is $$$0$$$.
The following discussion focuses only on valid cases.
For a single query, we have a simple brute-force approach with complexity $$$O(nm)$$$.
Let:
- $$$A_i$$$: the sum of fixed values among the first $$$i$$$ positions;
- $$$C_i$$$: the count of $$$-1$$$'s among the first $$$i$$$ positions;
- $$$rem = m - sum$$$.
Then:
Here, $$$x_i$$$ represents the total sum distributed to the first $$$C_i$$$ unknown variables (the -1's).
Fixing a prefix $$$i$$$, if the unknowns in this prefix are assigned a sum of $$$j$$$, then the suffix unknowns must be assigned $$$rem-j$$$.
The number of ways is given by the classic Stars and Bars method:
However, this approach still has excessive complexity. We consider further optimization.
This recurrence uses the following identity transformations:

- $$$n\binom{n+k-1}{k-1}= k\binom{n+k-1}{k}$$$,
- $$$n(n-1)\binom{n+k-1}{k-1}= k(k+1)\binom{n+k-1}{k+1}$$$,
- $$$\sum_{k=0}^{n} \binom{k}{a}\binom{n-k}{m-a}=\binom{n+1}{m+1}$$$.
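These three identities can also be verified numerically (the last one is a Vandermonde-type convolution; ranges below are arbitrary):

```python
from math import comb

# Brute-force check of the three identities used in the recurrence.
for n in range(0, 9):
    for k in range(1, 9):
        assert n * comb(n + k - 1, k - 1) == k * comb(n + k - 1, k)
        assert (n * (n - 1) * comb(n + k - 1, k - 1)
                == k * (k + 1) * comb(n + k - 1, k + 1))
for n in range(0, 9):
    for m in range(0, 9):
        for a in range(0, m + 1):
            assert (sum(comb(j, a) * comb(n - j, m - a) for j in range(n + 1))
                    == comb(n + 1, m + 1))
print("identities verified")
```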
Thus, we obtain an $$$O(n)$$$ solution for the problem, which corresponds to the solution for E1.
How can we optimize further?
Observing the formula above, it becomes clear that if we can quickly maintain interval information such as:
- $$$sum$$$,
- $$$cnt$$$,
- $$$\sum_{i=1}^nA_i^2$$$,
- $$$\sum_{i=1}^nA_iC_i$$$,
- $$$\sum_{i=1}^nC_i$$$,
- $$$\sum_{i=1}^nC_i^2$$$,
a Segment Tree can handle these operations within $$$O(\log n)$$$ time complexity.
Therefore, the solution for E2 is to use a Segment Tree to optimize the recurrence formula, achieving a total time complexity of $$$O(M+(n+q)\log n)$$$.
2228F - Momoyo and the Network
Author: Sanae
First Solution
We binary-search the answer $$$V$$$. The monotonicity is clear: a larger $$$V$$$ is harder to achieve, and as $$$k$$$ increases, the feasible $$$V$$$ decreases.
Root the tree arbitrarily at $$$1$$$. For a node $$$x$$$, define $$$dp(x)$$$ as the maximum length (in edges) of a downward-only path starting at $$$x$$$ such that every vertex on the path satisfies: the weight of the component containing that vertex (after cutting the path) is at least $$$V$$$.
During DFS, at node $$$x$$$, we have the $$$dp$$$-values of its children. We need to check whether we can form a path of length $$$k$$$ that passes through $$$x$$$ (or starts at $$$x$$$). This reduces to finding two distinct children $$$y_1$$$, $$$y_2$$$ such that $$$dp(y_1) + dp(y_2) + 2 \ge k$$$ (or a single child with $$$dp(y) + 1 \ge k$$$ if the path starts downward from $$$x$$$).
Sorting the $$$dp$$$-values of the children allows an $$$O(\text{deg}(x)\log \text{deg}(x))$$$ check per node via two-pointers.
The overall time complexity is: $$$O(n\log V\log n)$$$.
Second Solution
Instead of computing $$$dp$$$ by length, we can pre-compute the total weight of each subtree and sort the children of each vertex accordingly. For a candidate $$$V$$$, we can check feasibility by sorting children according to their subtree weights (or the weight of the "outside-the-path" component) and using a two-pointer scan to see if a $$$k$$$-edge path exists that meets the $$$V$$$ threshold everywhere.
This eliminates the $$$\log n$$$ factor from sorting $$$dp$$$-values inside the binary search.
The overall time complexity is: $$$O(n\log n+n \log V)$$$.
Third Solution
It is the same as juan_123's solution, included here just for reference.
Apply centroid decomposition. For a centroid $$$rt$$$, any $$$k$$$-edge path either lies entirely within one of $$$rt$$$'s subtrees (handled recursively) or passes through $$$rt$$$. In the latter case, the path consists of two legs, one in each of two different subtrees of $$$rt$$$.
For each vertex $$$u$$$ in a subtree, compute the minimum component weight along the path from $$$u$$$ to $$$rt$$$ (this is the minimum of the "remaining component weights" for vertices on that segment). Now, choosing two vertices $$$u$$$, $$$v$$$ in different subtrees gives a path through $$$rt$$$; the overall value is the minimum of the two minima from $$$u$$$ and $$$v$$$ and the components attached to other children of $$$rt$$$.
A key observation: if the largest component (by total weight) attached to $$$rt$$$ is not included in the path, it cannot be the minimum component. We only need to consider the possibility that the path touches the largest component, or handle the case where the largest component is excluded, which simplifies the check.
Carefully pruning with this observation, and using appropriate data structures to query for pairs of legs whose total length is $$$k$$$, leads to an $$$O(n \log n)$$$ solution.
Pruning cases where $$$k$$$ exceeds the size of the current component yields improved performance.
The overall time complexity is: $$$O(n\log n)$$$.
First, we consider how to characterize the connected components formed after deleting the edges along a path, specifically focusing on the component containing a vertex $$$x$$$ on the path.
Let the root of the tree be $$$r$$$. Define $$$sz_x$$$ as the sum of weights in the subtree of $$$x$$$, $$$fa_x$$$ as the parent of $$$x$$$, and $$$tot$$$ as the sum of all $$$a_i$$$. Let LCA denote the lowest common ancestor of all points on the path. We analyze the position of $$$x$$$ on the path:
1. If $$$x$$$ is an endpoint of the path but not the LCA, the edge connecting $$$x$$$ to its parent $$$fa_x$$$ is deleted, while the edges to its children remain. The size of the resulting component is $$$sz_x$$$.
2. If $$$x$$$ is neither the LCA nor an endpoint, the edges to $$$fa_x$$$ and to one specific child (say $$$y$$$) are deleted. The component size is $$$sz_x - sz_y$$$.
3. If $$$x$$$ is the LCA, then $$$1$$$ or $$$2$$$ edges to its children (say $$$y$$$ and $$$z$$$) are deleted. The component size is $$$tot - sz_y - sz_z$$$ (or $$$tot - sz_y$$$ if only one child edge is deleted).
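The three component sizes come out to $$$sz_x$$$, $$$sz_x - sz_y$$$, and $$$tot - sz_y - sz_z$$$ respectively (my reading of the definitions above, since the original formulas were not rendered here); this can be checked on a toy tree by actually deleting the path edges:

```python
from collections import defaultdict

# Toy tree rooted at 1, unit weights; edges chosen arbitrarily for the check.
edges = [(1, 2), (2, 3), (2, 4), (1, 5), (5, 6)]
a = {v: 1 for v in range(1, 7)}
tot = sum(a.values())

children = defaultdict(list)
parent = {1: None}
for u, v in edges:  # edges are listed parent-first above
    children[u].append(v)
    parent[v] = u

sz = {}
def fill_sz(u):
    sz[u] = a[u] + sum(fill_sz(c) for c in children[u])
    return sz[u]
fill_sz(1)

path = [3, 2, 1, 5, 6]                       # a path whose LCA is the root 1
removed = {frozenset(p) for p in zip(path, path[1:])}

def component_weight(x):
    """Weight of x's component after deleting the path edges (DFS)."""
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        for v in children[u] + ([parent[u]] if parent[u] else []):
            if v not in seen and frozenset((u, v)) not in removed:
                seen.add(v)
                stack.append(v)
    return sum(a[v] for v in seen)

assert component_weight(3) == sz[3]                # endpoint, not the LCA
assert component_weight(2) == sz[2] - sz[3]        # internal vertex, child y = 3
assert component_weight(1) == tot - sz[2] - sz[5]  # LCA, children y = 2, z = 5
print("component formulas verified")
```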
We employ Centroid Decomposition. At each step, we fix a centroid $$$r_0$$$ as the root and consider the contribution of all paths passing through $$$r_0$$$. For any such path, the LCA is $$$r_0$$$. Let $$$son_{r_0}$$$ be the set of children of $$$r_0$$$.
For contributions of type 1 and 2, we DFS the subtrees rooted at the children of $$$r_0$$$. We compute $$$val_{u,d}$$$, representing the maximum possible minimum component size if we select a path of length $$$d$$$ within the subtree of $$$u$$$ ($$$u \in son_{r_0}$$$), without yet considering the component containing the root $$$r_0$$$.
Now, consider a pair $$$(x, y, d_0, d_1)$$$ where $$$d_0 + d_1 = k$$$, $$$x, y \in son_{r_0}$$$, and $$$x \neq y$$$. This represents a path with endpoints in the subtrees of $$$x$$$ and $$$y$$$ at depths $$$d_0$$$ and $$$d_1$$$, respectively. The minimum component size for this configuration is:
Next, we handle the term $$$tot - sz_x - sz_y$$$ associated with the root component.
Note that for any $$$u$$$ and $$$d$$$, $$$val_{u,d} \leq sz_u$$$. This is because $$$a_i > 0$$$, and $$$val$$$ represents splitting a subtree into parts, which cannot exceed the sum of the whole subtree.
Let $$$v$$$ be the child of $$$r_0$$$ with the largest subtree size ($$$sz_v$$$). If $$$x \neq v$$$ and $$$y \neq v$$$, the size of the root's component $$$s = tot - sz_x - sz_y$$$ satisfies $$$s \geq sz_v$$$. Since this component fully contains the large subtree of $$$v$$$, it will never be the minimum. Thus, we can ignore the root's contribution in this case and use prefix/suffix minimums to calculate the contribution efficiently.
Otherwise, one endpoint lies in the subtree of $$$v$$$. We can enumerate the other endpoint's subtree, making the calculation straightforward.
The total time complexity is $$$O(n \log n)$$$.
A crucial detail is that during the DFS for subtree sizes, if we encounter a node that was previously a centroid root, we must add the size of the subtree blocked by that centroid to the current subtree size.








i feel bad that i missed n=3 case in B :(
I also missed that case n <= 3 I looked for all the options but didn't find a solution.
me also ;( i didn't notice this case even though i found the answer
there are more cheaters in this contest than expected; cf should take strict action on this
how did people use fenwick trees for D? I tried this but got 6-7 TLEs / MLEs, despite trying to optimize constant factors, use lighter trees, rewrite in C++, etc
eventually I had to use a prefix/suffix approach to pass
That wasn't even enough for me. I ended up passing with O(n) inner loop and pragma AVX. I'm going to run some benchmarks once practice opens up and will report back
I passed easily with $$$O(n)$$$(without optimizations) and got TLE in Pretest #19 with $$$O(n \log n)$$$. I guess that's normal.
counting sort orz
yes, getting TLE with O(nlogn). But the tutorial lists an O(nlogn) solution as one of the possible solutions
let d be the number of digits of a. Can anyone tell me: if we generate all possible numbers of length d from the given digits and check for the minimum difference (2^18, recursion; I don't know digit dp), and also check the largest number with (d-1) digits and the smallest number with (d+1) digits, will it work??
No because there are up to 10,000 test cases, and you might be doing 2^18 operations for each one
I wasted too much time on D because it doesn't allow $$$O(n \log n)$$$ algorithms to pass, so i didn't finish calculating E1's equation :(
Feeling bad because I managed to pass the example of E1 ten minutes ago, which is just 25 minutes after the contest :(
Hope that my rating won't decrease for that..
In fact, $$$O(n\log n)$$$ can pass if it is well implemented.
But I failed twice and had to use $$$O(n)$$$ instead, which made me fail to pass E1 :(
Maybe the problem F is so much similar to 2222G - Statistics on Tree?
Yes, to my surprise, too.
Maybe it is easier than that problem, since we don't need to do much time-complexity analysis.
In Problem D, what is the idea behind forcing the constraints so that some O(N log N) solutions pass and others do not??
If the author thinks that this kind of constraint-forcing will make the problem anti-AI, I think he made it harder for humans than for AI.
Initially, I only intended for the $$$O(n)$$$ solution to pass...
I feel like a more explicit decision should have been made. As of now, it seems the n log n editorial solution works only when written very tightly and in C, and java/python/etc users also get screwed.
Also, visually, n log n with 2 million * 21 looks like it should run fine. Even when I sped up the inner loop to O(n) (but still had std::sort and std::set to handle coordinates), I still got TLE.
Make your complaint.
Stupid D.
I am a fan of wdoi, but this contest is not good enough.
E is too complex and meaningless as Div2 E. I don't think any problem with difficulty <=3000 should use so looong calculation.
And F has a very classic and simple solution: just use binary search, then use simple dp and sorting to check. You can just pass with $$$\Theta(n\log n\log V)$$$ complexity, or sort before binary search and use two-pointer to get $$$\Theta(n(\log n+\log V))$$$.
Plz bring better problems to participants at next contest of wdoi, and accept suggestions about problem. Thx.
Addition: I expect E2 worth >=2600 difficulty, but 100+ participants pass this problem, and F is easier with less pass. How many cheaters in Div2??? That is insane.
If it is common for an expert/specialist to pass this E2 in 1 hour, I think people can still challenge AI :)
my submission:- https://mirror.codeforces.com/contest/2228/submission/374854997
can anybody tell me what's wrong with this
plz mention what approach you are using for anyone to read your code easily.
so first of all we get the string of a and remove the first character until it is not in D
then we make two cases
b > a
in this case pick the smallest digit > it; if there is none, then don't compute this case; and make all the other digits as small as possible
compute both numbers
and take abs difference
same is done with the second case
now when b has fewer digits than a, just let x = the number of digits in a and take the maximum digit in d, x - 1 times, to get this value and again update the answer
the case with more digits is handled this way
here sz is number of digits in a
605 4 2 5 6 9
Correct answer: 6
Use this test case to debug. It is incorrect to remove all the numbers in the beginning that match. There are cases where it is optimal to not match all the beginning digits.
Misunderstood B :(
If Remilia does not move at turn 1, k is still 1, so Remilia can move at turn 2. It is not "can move before turn k"; the truth is "can move at most k turns".
Just got it, thankyou for explaining!
if initially the distances on both sides are the same, the first player stops in place for 1 round. And then the second player would decrease the distance by moving in either direction. After the 1st round, they start chasing each other in the same direction. So it's still min_dist + k.
Yep! Misunderstood this one, as k is the time going on, Thanks for making it clear!!
But, somehow I had to add $$$1$$$ to the answer if the $$$\operatorname{abs}(dist1-dist2)\leq1$$$, because $$$\text{Remilia}$$$ would choose not to move in this case, so $$$k$$$ remains the same but the initial $$$min\_dist$$$ decreases by $$$1$$$.
I didn't like this one :(
A was fine B wasted a lot of time because I thought k was the total number of Remilia moves (it's total number of moves Remilia makes where she doesn't stay in place). It's my fault for misreading but still feels bad. C isn't that bad conceptually but it got REALLY bad once I started implementing. idk I find these kinds of questions where there isn't an insight just caseworking super boring. D looks interesting but I didn't get to it because of ABC :(
What was the intended solution for C1? Is it the same as C2? If so, why did you split C into two subtasks? The editorial's solution doesn't seem like it would be more complex to implement for 10 digits instead of 2. I started writing a brute-force solution for C1 (checking all 2^18 possibilities), but then realized that there are 10^4 tests per case and abandoned it.
I think it was just easier to think when there were 2 digits as to what needs to be done
I used binary search for C1 and C2. I don't really know how to explain it look at my submission: 374842701, if you struggled with implementation I would look at it because it's a lot simpler than greedy
would say L contest but I got paid off with expert performance
Also, are hacks disabled for any problems?
Bro wtf is your code doing? please explain
Ok, so first we define a function $$$\operatorname{gen}(x, w, b)$$$ where $$$x$$$ is the list of numbers, $$$w$$$ is the amount of digits of the generated number, and it finds the $$$b$$$-th smallest number (0-indexed). We can do this by representing $$$b$$$ as a base $$$n = \operatorname{len}(a)$$$ mask, for example:
For each $w$ from $$$1$$$ to $$$18$$$, we will binary search for the smallest value greater than $$$a$$$ that can be created from the digits, and use $$$\operatorname{gen}$$$ to create these values. The value with the smallest absolute difference per width will either be the value found from binary search, or the generated value that is just smaller than it.
Sorry if this is unclear it's a bit difficult to explain as I said
CaseForces
C has many corner cases without effective examples. What a "wonderful" problem.
B was super obvious but unfortunately I spent a lot of time trying to find the n=3 edge case. C1 and C2 are the same solution basically and problems like this are super boring, not creative just implementation hell.
My solution discussion stream ABC2D
E1 we can solve for sum of cubes or higher powers right?
probably, would require a lot of algebra though. or maybe there is an elegant way to use the solution for squares to construct solutions for cubes.
didn't C need some heavier formalism to really define if b = 0 is always a possible solution or not? For me, at least, that wasn't clear at all; but I might be bugging.
what is the point of asking n=2 soln for c1
So, I did some testing and I don't really like D.
During the round:
After the round, I tried a true O(n) solution with counting sort: 1.79s AC; it was actually worse.
My conclusion is that I feel like the time limits were set WAY too tight, and were not super effective at discriminating between O(n) and O(n log n) at all. This problem felt like constant factor hell in C++ and I'm not sure it's even solvable (in n or n log n) in java/python. One of my friends daniel.glabai had O(n) inner loop in java and TLE19'd as well.
Can anyone share their soln. to C1? Or it would be wonderful if you could point out the problem in mine..
https://mirror.codeforces.com/contest/2228/submission/374822532
I feel in B there is a mistake for example let's take the case of 6 1 4 3
so now both are at the maximum possible distance, and according to the submitted solution the answer comes out to be 6, but that seems incorrect, because when x2 moves in whichever direction, x1 will move in the same one, and then suddenly their distance decreases by 2.
so the answer should be 4 instead of 6. please correct me if this is wrong!
Thanks !