We will hold TOYOTA SYSTEMS Programming Contest 2025 (AtCoder Beginner Contest 431).
- Contest URL: https://atcoder.jp/contests/abc431
- Start Time: http://www.timeanddate.com/worldclock/fixedtime.html?iso=20251108T2100&p1=248
- Duration: 100 minutes
- Writer: toam, MMNMM
- Tester: sounansya, cn449
- Rated range: ~ 1999
- The point values: 100-200-300-400-500-500-575
We are looking forward to your participation!

Hard E, easy FG.
Seems so.
I don't think so, because I haven't learned those skills.
I do not agree.
Good luck to everyone!
Hope I can AC problem F!
Why am I never able to solve anything after C? I get stuck at D and then lose the motivation to even look at E, F, and G.
You can study some algorithms, like DP and so on.
Screencast (solving A-F)
https://youtu.be/vaY0QXko98Y
My approach for D:
Hard E, easy F.
It becomes easy once you come up with a strong conclusion, especially when you know there are definitely no alternatives.
RE C, because of long long
Could E be solved with 0-1 BFS?
Yes, because you either change (cost 1) or not change (cost 0) a mirror.
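For reference, here is a minimal generic 0-1 BFS sketch (not the actual E solution: the grid-and-mirror modelling is the problem-specific part and is assumed away here). Relaxations along weight-0 edges push to the front of a deque and weight-1 edges push to the back, so nodes leave the deque in nondecreasing distance order:

```python
from collections import deque

def zero_one_bfs(n, adj, src):
    """Shortest distances from src when every edge weight is 0 or 1.

    adj[u] is a list of (v, w) pairs with w in {0, 1}.
    Runs in O(V + E) using a deque instead of a priority queue.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    dq = deque([src])
    while dq:
        u = dq.popleft()
        # a node may sit in the deque several times; stale entries are
        # harmless because their relaxations simply fail below
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                if w == 0:
                    dq.appendleft(v)  # weight 0: same distance, front
                else:
                    dq.append(v)      # weight 1: one farther, back
    return dist
```

Modelling E on top of this would mean: one node per (cell, direction) state, weight-0 edges for "keep the mirror as is", weight-1 edges for "change the mirror".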
What's the most elegant way to solve D?
Knapsack.
dp[w_body - w_head] = max happiness
You either increase disbalance (attach to the body), or decrease it (attach to the head)
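I don't have the exact statement handy, so this is only a guess at the problem shape: assuming each item has a weight w and a happiness h, may go on the body, go on the head, or be skipped, and the two sides must end up equally heavy, the knapsack over the disbalance looks like:

```python
def max_happiness(items):
    """Knapsack over the disbalance w_body - w_head (assumed problem shape).

    items: list of (weight, happiness); each item is attached to the
    body, attached to the head, or skipped.
    Returns the max total happiness with both sides equally heavy.
    """
    # dp maps disbalance (w_body - w_head) -> best happiness so far;
    # a dict stands in for the comment's offset-indexed array
    dp = {0: 0}
    for w, h in items:
        ndp = dict(dp)  # option 1: skip the item
        for d, best in dp.items():
            for nd, nh in ((d + w, best + h),   # option 2: body (+disbalance)
                           (d - w, best + h)):  # option 3: head (-disbalance)
                if nd not in ndp or ndp[nd] < nh:
                    ndp[nd] = nh
        dp = ndp
    return dp[0]  # perfectly balanced in the end
```

Treat this as a sketch of the technique the comment describes, not the official solution.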
In F, it felt like there were a lot of constraints.
Bad decision making. I thought I'd give E a try, but I couldn't finish in time while I could have solved F.
Why does an index-out-of-range access result in a TLE verdict? I expected an IndexOutOfRangeException, but the runtime error doesn't occur in practice.
Wrong Submission
Correct Submission
Notice the size of the li array.
I read Problem C wrongly, missed a condition, and wasted a lot of time on that problem.
I don't understand why it's okay to interpret problem E as a standard shortest path problem. If I take an edge with weight 1, doesn't that change the weight of other edges, since the mirror placement is modified?
From what I understand, algorithms like 0-1 BFS are applicable to fixed graphs. In problem E, taking a weighted edge seems to change the weight of other edges. How is it okay to use 0-1 BFS on this kind of graph? Please help me understand...
I have struggled a lot with the "changed edge" affecting later results the whole time. But I think there is an intuitive reason why it doesn't matter.
Let's say you have changed an edge from type A to type B. Then you go through that edge. Then at some later time, you reach that cell again. Now because you don't store the previous information, you might "again" change that edge from type A to type B, then the weight would be added twice.
But notice that when you reach that cell again, you could have just not changed that edge from type A to type B the first time, but rather from type A to type C, or just kept it as type A, which would lead you to the intended direction the second time you reach the cell.
The conclusion is: Changing an edge twice would lead to a sub-optimal way to reach the intended direction, that's why you would not "add twice", but rather just keep it as it is, or only change it once.
Thanks for the explanation. It does seem like returning to an already-modified cell is always sub-optimal. I tried building several counterexamples that might prove otherwise, but they all failed. Your intuitive explanation makes total sense. I really wish the editorial made this clearer, though...
This argument has one little flaw: it doesn't cover the case where the direction when entering the common cell (for the first time) and the direction when exiting it (after visiting it the 2nd time) are opposite (i.e. LR, RL, UD or DU), because then you just can't convert, say, an L to an R via one A, B, or C type cell. This is the case that troubled me a lot. But stupid me didn't realise (in time) that this case can't occur if we take the common cell to be the first cell on the path that is visited >= 2 times.
Changing it twice is never optimal. We are 'saved' by the fact that we only care about the shortest path.
At the end of the day, it is "only" E, so the solution shouldn't be too complex. Knowledge about complexity helps.
F = https://www.luogu.com.cn/problem/P6522
Here's a somewhat tricky but very interesting idea for problem F.
First, treat equal numbers as distinct; at the end divide by the factorial of the multiplicity of each value.
This pairwise adjacency constraint can be handled by the standard inclusion–exclusion trick: treat each condition $$$B_i - D \le B_{i+1}$$$ as either the negated condition $$$B_i - D \gt B_{i+1}$$$ or "no restriction".
Counting the arrangements that all satisfy $$$B_i - D \le B_{i+1}$$$ is hard. Instead consider arrangements that have at least one violation of $$$B_i - D \le B_{i+1}$$$ — i.e. some positions where the binary relation is $$$B_i - D \gt B_{i+1}$$$. Count those with one violating position, subtract those with two violating positions, add those with three, and so on; finally subtract this alternating sum from the $$$n!$$$ arrangements with no restrictions. This is exactly applying inclusion–exclusion by viewing each $$$B_i - D \le B_{i+1}$$$ as either $$$B_i - D \gt B_{i+1}$$$ or unconstrained. If the number of indices with $$$B_i - D \gt B_{i+1}$$$ is $$$x$$$, a configuration contributes $$$(-1)^x$$$.
Equivalently, this is the same as partitioning the array $$$A$$$ into $$$j$$$ groups so that within each group adjacent differences are $$$ \gt D$$$. Such a partition contributes $$$(-1)^{\,n-j}$$$.
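This reformulation is easy to sanity-check by brute force on tiny inputs: the direct count of valid orderings should equal the signed sum over (ordering, partition-into-groups) pairs where every group is internally violating. A throwaway checker (my own, just for verification):

```python
from itertools import permutations

def direct_count(a, d):
    """Orderings b of a (positions treated as distinct) with
    b[i] - d <= b[i+1] for every adjacent pair."""
    return sum(
        all(b[i] - d <= b[i + 1] for i in range(len(b) - 1))
        for b in permutations(a)
    )

def signed_partition_sum(a, d):
    """Sum of (-1)^(n-j) over (ordering, split into j consecutive groups)
    where inside every group each adjacent pair violates: b[i] - d > b[i+1]."""
    n = len(a)
    total = 0
    for b in permutations(a):
        for mask in range(1 << (n - 1)):  # bit i set = cut between i and i+1
            j = 1 + bin(mask).count("1")  # number of groups
            if all((mask >> i & 1) or b[i] - d > b[i + 1]
                   for i in range(n - 1)):
                total += (-1) ** (n - j)
    return total
```

Both functions count labeled orderings, so the comparison is apples to apples even when $$$A$$$ contains equal values.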
Sort the array $$$A$$$ in descending order and define a DP $$$f_{i,j}$$$ as: after considering the $$$i$$$ largest elements, they are split into $$$j$$$ groups. When we process the $$$i$$$-th largest element $$$A_i$$$, suppose we can compute $$$k$$$ = the number of groups whose last element is $$$ \gt A_i+D$$$. Then there are $$$k$$$ ways to insert $$$A_i$$$ into an existing group; alternatively we can start a new group with $$$A_i$$$, and there are $$$j+1$$$ possible group positions for that (because groups can be permuted).
It is not hard to see that for fixed $$$i,j$$$ the value $$$k$$$ is fixed. More precisely: when transitioning from $$$f_{i-1,j}$$$ to $$$f_{i,*}$$$, there are $$$i-1$$$ elements already placed. Because there are $$$j$$$ groups, the first element of each group does not need a predecessor $$$ \gt A_i+D$$$, so there are $$$i-1-j$$$ elements that do require a predecessor $$$ \gt A_i+D$$$.
Using a two-pointer technique, find the largest $$$t$$$ satisfying $$$A_t - D \gt A_i$$$. Clearly the first $$$t$$$ elements are eligible to be predecessors; of those we have already used $$$i-1-j$$$ as predecessors, so there remain $$$t - (i-1-j) = t - i + 1 + j$$$ available. Hence $$$k = t - i + 1 + j$$$.
You might worry that a perfect matching of predecessors may not always exist (for example if those $$$t$$$ candidates are all to the right while the elements needing predecessors are all to the left), and then the above calculation might seem wrong. But that doesn't matter: in such a case necessarily $$$f_{i-1,j}=0$$$ because a valid matching cannot be formed earlier either. When a matching becomes impossible there will be a moment when $$$k=0$$$, making all subsequent states zero, so any potentially incorrect transitions have no effect on the final answer. Therefore the transition is safe.
The DP transitions follow naturally:

$$$f_{i,j} \gets f_{i,j} + (t-i+1+j)\,f_{i-1,j},$$$
$$$f_{i,j+1} \gets f_{i,j+1} + (j+1)\,f_{i-1,j}.$$$

(Here the first line accounts for inserting $$$A_i$$$ into one of the $$$k = t-i+1+j$$$ eligible groups; the second line accounts for starting a new group, which can be placed in any of $$$j+1$$$ positions.)
The final answer is

$$$\sum_{j=1}^{n} (-1)^{\,n-j} f_{n,j},$$$

divided by the factorial of the multiplicity of each value.
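To make the DP concrete, here is a Python sketch of the quadratic version as I understand it (my own reconstruction from the description above, not the author's code; it treats equal values as distinct, so divide by the multiplicities' factorials afterwards, and it assumes $$$D \ge 0$$$):

```python
def count_arrangements_n2(a, d):
    """Orderings of a (equal values treated as distinct) with
    B[i] - d <= B[i+1] for all i, via the signed-partition DP.
    Assumes d >= 0."""
    a = sorted(a, reverse=True)
    n = len(a)
    f = [0] * (n + 1)   # f[j] = f_{i,j} for the current i
    f[0] = 1
    t = 0               # two pointers: #elements with value > a[i-1] + d
    for i in range(1, n + 1):
        while t < n and a[t] > a[i - 1] + d:
            t += 1
        g = [0] * (n + 1)
        for j in range(i):
            if not f[j]:
                continue
            k = t - i + 1 + j     # groups whose last element can precede a[i-1]
            if k > 0:
                g[j] += k * f[j]  # append a[i-1] to one of those groups
            g[j + 1] += (j + 1) * f[j]  # open a new group (j+1 positions)
        f = g
    # inclusion-exclusion signs: j groups contribute (-1)^(n-j)
    return sum((-1) ** (n - j) * f[j] for j in range(n + 1))
```

For example, on $$$A = [1,2,3]$$$, $$$D = 1$$$ this returns 4, matching a direct enumeration of the permutations.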
This yields an $$$O(n^2)$$$ algorithm. Can we do better?
Write the row $$$f_{i,*}$$$ as a generating function

$$$F_i(x) = \sum_{j} f_{i,j}\,x^j.$$$
Since a term with $$$j$$$ groups carries sign $$$(-1)^{n-j}$$$, the final answer is $$$(-1)^n F_n(-1)$$$.
Examine the transition in generating-function form. For the first kind of update we have

$$$\sum_j (t-i+1+j)\,f_{i-1,j}\,x^j = (t-i+1)F_{i-1}(x) + xF'_{i-1}(x).$$$

The $$$f_{i-1,j}\times (t-i+1)$$$ part corresponds to $$$(t-i+1)F_{i-1}(x)$$$, while the $$$f_{i-1,j}\times j$$$ part corresponds to $$$xF'_{i-1}(x)$$$.
The second update is

$$$f_{i,j+1} \gets f_{i,j+1} + (j+1)\,f_{i-1,j},$$$

which in generating-function terms is

$$$\sum_j (j+1)\,f_{i-1,j}\,x^{j+1} = x\bigl(xF_{i-1}(x)\bigr)' = xF_{i-1}(x) + x^2F'_{i-1}(x).$$$
Combining both updates gives

$$$F_i(x) = (t-i+1+x)\,F_{i-1}(x) + x(1+x)\,F'_{i-1}(x).$$$
Since we only need the value at $$$x=-1$$$, substitute $$$x=-1$$$ and observe that the $$$x(1+x)F'_{i-1}(x)$$$ term vanishes:

$$$F_i(-1) = (t-i)\,F_{i-1}(-1).$$$
This gives a very elegant $$$O(n)$$$ recurrence and completes the optimization.
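Unrolling $$$F_i(-1) = (t_i - i)\,F_{i-1}(-1)$$$ from $$$F_0(-1) = 1$$$, the whole count collapses to a product. A sketch (again my own reconstruction, counting equal values as distinct and assuming $$$D \ge 0$$$; the $$$(-1)^n$$$ sign from evaluating at $$$x=-1$$$ is folded in by using $$$i - t_i$$$ as the factor):

```python
def count_arrangements_linear(a, d):
    """Same count as the O(n^2) DP, via F_i(-1) = (t_i - i) F_{i-1}(-1).

    With a sorted descending and t_i = #elements with value > a[i-1] + d,
    the answer is prod_{i=1..n} (i - t_i); equal values are treated as
    distinct, and d >= 0 is assumed. O(n) after the initial sort.
    """
    a = sorted(a, reverse=True)
    n = len(a)
    ans = 1
    t = 0  # two pointers: t only moves forward as the threshold decreases
    for i in range(1, n + 1):
        while t < n and a[t] > a[i - 1] + d:
            t += 1
        ans *= i - t  # (-1)^n * (t_i - i), sign already folded in
    return ans
```

On $$$A = [1,2,3]$$$, $$$D = 1$$$ this also returns 4, agreeing with the quadratic DP.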
I did the exact same problem as F in a (mock-CSP) school contest on 10-30. Coincidence.
Fun fact: it was placed as the first problem (which should be the easiest), but I found it the hardest, and later I found that everybody else (in my computer room) simply guessed it without any proof.
E < (F,G)
Does anyone think the same as me?