Hello, everyone!
Japanese Olympiad in Informatics (JOI) Spring Training 2025 (Contest URL) will be held from March 21st to 24th. The contest consists of 4 days.
- Day 1: March 21, 02:00 GMT — March 21, 24:00 GMT
- Day 2: March 22, 02:00 GMT — March 22, 24:00 GMT
- Day 3: March 23, 02:00 GMT — March 23, 24:00 GMT
- Day 4: March 24, 02:00 GMT — March 24, 24:00 GMT
The contest information is as follows. Details are available on the contest page.
- Number of problems for each contest: 3-4 problems
- Style: IOI-Style (partial scores may be available)
- Problem Statement: Japanese & English
- There may be some unusual tasks (e.g. output-only tasks, communication tasks), as in the IOI
The registration page and live scoreboard will be announced 10 minutes before the contest.
We welcome all participants. Good luck and have fun!
UPD1: Due to a technical issue, Day 1 of the contest will be delayed by 30 minutes.
UPD2: Contest Day 1 has finished. Now you can discuss the problems.
UPD3: Contest Day 2 has finished. Now you can discuss the problems.
UPD4: Contest Day 3 has finished. Now you can discuss the problems.
UPD5: Contest Day 4 has finished. Thank you for your participation!

Why C++ only?
It's that time of the year again. Nice.
why are you training on JOI problems
Because I'm OI eligible
good luck to you
Hi hashman
Now it's less than 10min before the contest starts, but I still cannot enter the contest site (502 Bad Gateway). Will this contest be delayed again?
Update: the contest site is accessible now, thanks for your fix!
Solution for Exhibition?
Also waiting for the solution!
The problem can be reduced to iterating over $$$v$$$ from $$$n$$$ to $$$1$$$; each time, we delete the lexicographically smallest set of intervals which can be covered with $$$cnt_v$$$ elements.
To check this for a certain set of intervals $$$S$$$, a simple greedy is to take the minimum $$$r$$$ among all uncovered intervals and place an element with value $$$v$$$ at that position. This runs in $$$O(|S|\log|S|)$$$ time due to sorting and computes the minimum number of elements needed to cover all intervals in $$$S$$$; we will call this value $$$f(S)$$$.
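For reference, here is a minimal sketch of this greedy (assuming intervals are given as $$$(l, r)$$$ pairs; the function name is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// f(S): minimum number of elements (points) needed so that every
// interval in S contains at least one of them. Sort by right endpoint;
// whenever an interval is still uncovered, place a point at its r.
// Runs in O(|S| log |S|) due to the sort.
int f(vector<pair<long long, long long>> S) {
    sort(S.begin(), S.end(), [](const auto& a, const auto& b) {
        return a.second < b.second;
    });
    int cnt = 0;
    long long last = LLONG_MIN;  // position of the most recent point
    for (const auto& [l, r] : S) {
        if (l > last) {          // this interval is not yet covered
            last = r;            // greedily place a point at its right end
            ++cnt;
        }
    }
    return cnt;
}
```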
We can run this greedy in both directions: from left to right, each time placing the element as far right as possible, and from right to left symmetrically. We name the $$$i$$$-th element on these two sequences $$$sr_i$$$ and $$$sl_i$$$, respectively.
To calculate the maximal prefix that can be covered for some $$$v$$$, we can run a brute force check on lengths $$$2^0,2^1,2^2,\cdots$$$, until we find that the maximal prefix lies in the range $$$[2^l,2^{l+1})$$$. Then we can binary search within that range; the complexity is bounded by $$$O(len\log^2 len)$$$, where $$$len$$$ is the length of the maximal prefix. This can be further optimized to $$$O(len\log len)$$$ if we sort the first $$$2^{l+1}$$$ intervals up front, so that no sorting is needed during the binary search.
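A generic sketch of this doubling-plus-binary-search pattern, assuming a hypothetical monotone predicate `check` that is true exactly for feasible prefix lengths:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Largest len in [0, n] with check(len) true, where check is monotone
// (true on a prefix of lengths). Doubling locates the range
// [2^l, 2^(l+1)), then binary search pins the answer inside it.
long long maximalPrefix(long long n, const function<bool(long long)>& check) {
    long long hi = 1;
    while (hi <= n && check(hi)) hi <<= 1;  // overshoot the answer
    long long lo = hi >> 1;                 // check(lo) is known true (or lo == 0)
    hi = min(hi, n + 1);                    // answer lies in [lo, hi)
    while (lo + 1 < hi) {
        long long mid = (lo + hi) / 2;
        if (check(mid)) lo = mid; else hi = mid;
    }
    return lo;
}
```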
Suppose that for some $$$v$$$ we have calculated the maximal prefix $$$S$$$ which can be covered; then it follows that $$$f(S)=cnt_v$$$ (or we are in the trivial case that $$$S$$$ contains all intervals). To check whether a new interval can be added to $$$S$$$, we only need to check whether $$$f(S)$$$ increases.
Observation 1: $$$f(S)$$$ increases iff the new interval does not intersect with any $$$[sl_i,sr_i]$$$.
This can be inferred from the definition of $$$sr$$$ and $$$sl$$$, with some minor reasoning.
Now, with some extra effort, we will have a solution that runs in $$$O(nm)$$$. When $$$n$$$ becomes larger, we will need to find an efficient way to maintain $$$sr$$$ and $$$sl$$$.
Observation 2: Each interval will only enter and exit $$$sr$$$ at most once. The same holds for $$$sl$$$.
This is because each $$$sr_i$$$ is decreasing over time, and each interval can only enter $$$sr$$$ at one position; otherwise we could deduce that $$$f(S)$$$ has increased.
Each time we add an interval to $$$S$$$, we can run binary search to find its place among $$$sr$$$, and try to update one by one. We will need to maintain the "next" of each value with some data structures (I used a segment tree). We will perform linear updates to $$$sr$$$ and $$$sl$$$ in total.
Using the above, if we do an $$$O(m)$$$ check for all $$$v$$$, we can pass $$$a_i\le 5$$$. To optimize, the only task which remains is to locate the next interval that can be added to $$$S$$$ efficiently.
Suppose we only need to check for a single $$$[sl_i,sr_i]$$$, that is, find the first interval which intersects with $$$[sl_i,sr_i]$$$. This can be solved by keeping two sets on a segment tree over positions $$$1,2,\cdots,n$$$, which yields $$$O(\log^2 n)$$$ insertion/deletion and $$$O(\log n)$$$ query.
A simple idea is to keep a data structure containing the first interval for every $$$[sl_i,sr_i]$$$; each time, we take the minimum, check that it is not already in $$$S$$$, and add it to $$$S$$$. However, this is incorrect, as an interval can be the first interval for many $$$[sl_i,sr_i]$$$'s, and we have no way of updating them all efficiently.
Observation 3: If we take the $$$sr$$$ and $$$sl$$$ of $$$S$$$, all intervals that contain at least one $$$[sl_i,sr_i]$$$ can always be added to $$$S$$$. Adding them also has no influence on $$$sr$$$ and $$$sl$$$.
Therefore, after we calculate the maximal prefix, we take the current $$$sr$$$ and $$$sl$$$ and add all intervals satisfying the condition above. After this process, any interval will intersect with at most two $$$[sl_i,sr_i]$$$'s in the future, so we will have linear updates to this set in total.
Notice that all our insertions into the sets on the segment tree are in increasing order and happen only at the beginning; the main algorithm performs only queries and deletions. Therefore, we can replace the sets with linked lists to optimize this to $$$O(\log n)$$$.
Total complexity: $$$O(n\log n)$$$.
Huge thanks!
What's the solution for Travel?
I used parallel binary search and DSU to calculate the maximum altitude you have to pass to get from $$$a$$$ to $$$b$$$. Then, for every position, I found a reachable position that gives the biggest altitude, and did binary lifting on that.
I think there is a way without parallel binary search: sort the numbers in increasing order and maintain a DSU of reachable nodes. You know that for each query a node will only transition to the biggest reachable node from it. Does anyone have a different solution?
How to get more than 50 points in Ambulance?
Day $$$1$$$ Solution Write-ups:
Note that the correct greedy strategy is to maintain your set of reachable nodes, jump from the highest point in this set, and then recalculate the set of reachable nodes. To optimize this, we can add edges of the form $$$x \rightarrow$$$ (highest point in the set of reachable nodes of $$$x$$$).
First of all, let's discuss how to build these edges efficiently. We can sort the points in order of their heights, and then maintain an incremental DSU. When the current processing value is $$$A_i$$$, before uniting $$$i$$$ with all its neighbours, we find the maximum-value nodes of the components with $$$A_x \lt A_i - K$$$ (where $$$K$$$ is the jumping parameter).
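A minimal sketch of such an incremental DSU that also tracks the highest-altitude node of each component (the array $$$A$$$ and all names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU that, besides connectivity, remembers the node with the maximum
// altitude in each component; process nodes in increasing A to build
// the jump edges described above.
struct DSU {
    vector<int> parent, best;        // best[root] = node with max altitude
    const vector<long long>& A;      // altitudes
    DSU(int n, const vector<long long>& alt) : parent(n), best(n), A(alt) {
        iota(parent.begin(), parent.end(), 0);
        iota(best.begin(), best.end(), 0);
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return;
        parent[a] = b;
        if (A[best[a]] > A[best[b]]) best[b] = best[a];
    }
    int highest(int x) { return best[find(x)]; }  // jump target candidate
};
```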
Now, this reduces to a problem of the form: find the first ancestor of $$$(A, B)$$$ such that it can reach $$$(C, D)$$$. We can directly use parallel binary search to solve this. The structure needed to maintain reachability is exactly the same as the previous structure.
Complexity: $$$O(N \log N \cdot \alpha(N))$$$
The idea is to build a base sequence that you can recognize, and then insert numbers into it. What I did is first make $$$0101\dots$$$ with exactly $$$10$$$ zeros and $$$10$$$ ones. Then, I assigned weights to the continuous runs, the weights being $$$2^0, 2^0, 2^1, 2^1, 2^2, 2^2, \dots$$$.
Once the base sequence is built, we can easily add things to it with the appropriate weight using at most $$$\log N$$$ additional items (since we have options for both $$$0$$$s and $$$1$$$s). So this gets you a solution in $$$3 \log N$$$.
Handling the cases where there are fewer than $$$10$$$ zeros or $$$10$$$ ones (or there eventually are $$$10$$$ zeros, but they appear late) is slightly tricky. The idea is to build the sequence as normal while acting as if the base were complemented; if the base was in fact not complemented, we can identify that in Bruno.
Some possible optimizations for higher points:
Use base $$$3$$$ for $$$4 \log_3 N$$$ instead of $$$3 \log_2 N$$$. Instead of assigning weights, you can do better by making a bijection (or near bijection) from all sequences such that $$$x_1 + x_2 + \dots + x_k = m$$$, with $$$k$$$, $$$m$$$ being parameters, taking $$$k = m = \log_4 N + \log \log N$$$ since there are $$$\binom{m + k}{k}$$$ such sequences. You can get around $$$18$$$ (suggested by jtnydv25).
Could you elaborate on how you use parallel binary search?
Day $$$2$$$ Solution Write-ups:
Suppose that only $$$(1, 1)$$$ and $$$(L, L)$$$ existed. Let $$$C_i$$$ denote the cost to take $$$i$$$ to $$$(1, 1)$$$ and $$$D_i$$$ the cost to take $$$i$$$ to $$$(L, L)$$$. Then $$$C_i + D_i = 4 \cdot (L - 1)$$$, i.e. their sum is constant. This is a powerful constraint, because it means that sorting by $$$C_i - D_i$$$ is valid: we can take a prefix of that order and send it to $$$(1, 1)$$$, and send the rest to $$$(L, L)$$$.
This solves the problem in $$$2^N$$$ if we brute-force over the mask of indices that go to $$$(1, 1)$$$ or $$$(L, L)$$$ while the rest go to $$$(1, L)$$$ or $$$(L, 1)$$$, and greedily solve each side.
Let us sort the sequence by $$$cost(1, 1) - cost(L, L)$$$. Then all the positions that go to $$$(1, 1)$$$ will occur before all the positions that go to $$$(L, L)$$$. Similarly, sorting by $$$cost(1, L) - cost(L, 1)$$$ gives us a similar result.
Now, the crucial idea is that every solution can be described with $$$2$$$ variables $$$b_0$$$ and $$$b_1$$$ (border variables), where an index $$$i$$$ can go to $$$(1, 1)$$$ if and only if $$$i \le b_0$$$ in the first sorted order, while $$$i$$$ can go to $$$(1, L)$$$ if and only if $$$i \le b_1$$$ in the second sorted order.
This divides the group of nodes into $$$4$$$ distinct groups, each having exactly $$$2$$$ choices, for example either $$$(1, 1)$$$ or $$$(1, L)$$$.
Fix the variables $$$b_0$$$ and $$$b_1$$$, and then for each group do a dynamic programming of the form dp[time used up of the first kind] = minimum time of the second kind (or infinity if impossible). This can be calculated in $$$O(NT)$$$ per group.
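A sketch of this per-group DP, under the assumption that each item in a group independently chooses between its two destinations with time costs `cost1[i]` and `cost2[i]` (illustrative names):

```cpp
#include <bits/stdc++.h>
using namespace std;

// dp[t] = minimum total time of the second kind, given exactly t units
// of the first kind are used; INF marks unreachable states. O(N * T).
vector<long long> groupDP(const vector<int>& cost1,
                          const vector<int>& cost2, int T) {
    const long long INF = LLONG_MAX / 4;
    vector<long long> dp(T + 1, INF);
    dp[0] = 0;
    for (size_t i = 0; i < cost1.size(); ++i) {  // cost1, cost2 same length
        vector<long long> nd(T + 1, INF);
        for (int t = 0; t <= T; ++t) {
            if (dp[t] >= INF) continue;
            if (t + cost1[i] <= T)  // item i pays with first-kind time
                nd[t + cost1[i]] = min(nd[t + cost1[i]], dp[t]);
            // item i pays with second-kind time instead
            nd[t] = min(nd[t], dp[t] + cost2[i]);
        }
        dp = nd;
    }
    return dp;
}
```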
After this, to check whether a solution is possible, note that it has a cyclic structure. Let us fix the amount of $$$(1, 1)$$$ time used up in the group $$$\{(1, 1), (1, L)\}$$$. This also tells us how much $$$(1, L)$$$ remains, which can only be used in the group $$$\{(1, L), (L, L)\}$$$; that then gives the value for $$$\{(L, L), (L, 1)\}$$$, and that finally gives the value for $$$\{(L, 1), (1, 1)\}$$$. Check that there are no infinity results and that the total $$$(1, 1)$$$ time used is $$$\le T$$$.
Naively, this is $$$O(N^3 \cdot T)$$$ since we fix both $$$b_0$$$ and $$$b_1$$$ (this surprisingly gets $$$81$$$ points), but we can make it $$$O(N^2 \cdot T)$$$ by building prefix and suffix dps instead of fixing $$$b_1$$$, since there are not many updates.
Note that at least one of $$$(A, B)$$$ and $$$(B, A)$$$ is always achievable. The bad counter is the number of pairs for which both are not achievable; for such a pair, either both occurrences of $$$A$$$ appear before both occurrences of $$$B$$$, or vice versa.
Further, the claim is that in $$$1$$$ move, we can always make the bad counter smaller by $$$1$$$. Proof: consider the right endpoint of some $$$A$$$, say $$$X$$$, such that there exists a $$$B$$$ whose first occurrence appears to the right of the second occurrence of $$$A$$$. Either $$$X + 1$$$ is the right endpoint of some colour, or $$$X + 1$$$ is the left endpoint of some colour. If it is a left endpoint, swap and we are done. If it is a right endpoint, restart the process with $$$X + 1$$$ as our new $$$X$$$; we are still guaranteed to have some colour completely on the right. Eventually this process stops.
Now, we need to calculate $$$D_i$$$ = number of good pairs for each cyclic subarray $$$[i, i + 2 \cdot N)$$$. This is fairly standard with a Fenwick-tree sweepline. For answering queries, there are $$$2$$$ options: either take the minimum cost among those with $$$D_i \ge K$$$ (trivial), or take the minimum cost among those with $$$D_i \lt K$$$, in which case the cost is $$$C_i + (K - D_i) \cdot X$$$; i.e. store the minimum value of $$$C_i - D_i \cdot X$$$ and add $$$K \cdot X$$$ to it.
Everything can be done in $$$O((N + Q) \log N)$$$.
Is there a solution with better complexity than $$$O(N^2 \cdot T)$$$ in Ambulance?
After 10 years of trying I finally won JOISC 2025 Open!! So happy!!
Congratulations! By the way, I (a.k.a. tplerasp) am also extremely excited to get the runner-up place, though 70 points lower than you :)
Rank 12. How do you 900+ guys' minds work?? You are too strong.
Thank you for the nice problems. Overall I enjoyed the contest very much. Spending $$$8+$$$ hours (till $$$5$$$ am) on the same problem, and still being unable to solve it, was somehow fun. Here are some more solutions for the problems I managed to solve.
Let's solve $$$N = 32$$$.
The idea is to first make groups of $$$2$$$ based on the first $$$4$$$ most significant bits being equal. We can query $$$i \oplus 1$$$, and now we will have $$$2$$$ people who know the special person.
After this, make groups of $$$4$$$ based on the first $$$3$$$ most significant bits. Query $$$i \oplus 2$$$. If you already know the special person, indicate that with a $$$1$$$, otherwise put $$$0$$$. But you also need to make these other $$$2$$$ people aware of the $$$0$$$-th bit as well, so this takes an extra turn.
If you solve like this, you will get a $$$\log^2$$$ solution. But we can parallelize the steps, passing information for multiple steps at the same time, to get a solution with asymptotics $$$2 \log$$$. The details are left as an exercise to the reader.
For $$$n = 48$$$, make $$$3$$$ groups of $$$16$$$ at the end; then exactly one of the groups knows the special person. So, make one query to figure out which group this is.
For the type $$$1$$$ queries, let's maintain small-to-large merging by depth. For each type $$$1$$$ query of the form $$$(X, Y)$$$, we merge the depth-$$$X$$$ array into the depth-$$$Y$$$ array. For a type $$$2$$$ query, we create a new object and insert it at the appropriate depth.
Note that for a type $$$3$$$ query, we care about the sum of the values at your depth that lie in your subtree, i.e. a range query. So we could solve this by maintaining a structure that answers range sums quickly (like an implicit segment tree), but this won't fit the limits.
Instead, we can pop all the elements lying in $$$[tin_x, tout_x]$$$, find the sum naively, and then reinsert only one element of the form $$$(tin_x, total)$$$. Each element only gets killed once, so this has an asymptotic of $$$O((N + Q)\log^2 N)$$$ with a fast constant factor.
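A minimal sketch of this pop-and-reinsert step, with one depth level stored as an ordered map from Euler-tour position to value (names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Pop every element with position in [tinX, toutX], return their sum,
// and reinsert the total as a single entry at tinX. Each original
// element is erased at most once over the whole algorithm.
long long subtreeSum(map<int, long long>& level, int tinX, int toutX) {
    long long total = 0;
    auto it = level.lower_bound(tinX);
    while (it != level.end() && it->first <= toutX) {
        total += it->second;
        it = level.erase(it);        // killed once, never revived separately
    }
    if (total != 0) level[tinX] = total;  // one aggregated survivor
    return total;
}
```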
Find a greedy strategy that works (and can also be optimized).
Here is one that works: at each step, take the rightmost occurrence of the smallest element that does not break the "prefix sum $$$\ge 0$$$" condition. The proof is left as an exercise.
So, in essence, we will multiply some suffix of the occurrences of $$$x$$$ by $$$-1$$$. We can binary search this border in increasing order of $$$x$$$ and check whether it is valid, somewhat naively, in $$$O(NA^2 \log)$$$ (by maintaining the previous borders of $$$\lt x$$$, and finding sums and minimum prefix sums of the ranges).
There are $$$2$$$ main ideas involved in actually solving the previous section in $$$O(NA^2 \log)$$$. First of all, the borders are obviously only increasing. This means that all elements $$$\le x$$$ to the right of the border must be negative. So we can find the minimum prefix sum starting from the border $$$b$$$ efficiently by precomputing prefix-sum arrays treating $$$\le x$$$ as negative and $$$\gt x$$$ as positive. The sum before the border is trivial to compute in $$$O(NA^2 \log)$$$.
To optimize, the slow step is "finding the sum before the border", and here we again use the fact that borders are increasing. The sum looks like $$$pref_{(mid - 1, i)} - pref_{(b_i - 1, i)}$$$. We can maintain the $$$2$$$ quantities separately to get $$$O(NA \log)$$$.
The limits are very tight (for my solution at least), but it passed eventually
Who saw an $$$O(N\log^2 N)$$$ solution and seriously put $$$N = 2 \cdot 10^6$$$??? Why? Did you really need to block sqrt that much? Please set reasonable limits, not "well, it runs in time, so it's fine". If that lets some lesser solutions pass, that's preferable.
On C as well, I had to optimize an $$$O(NA \log)$$$ solution for more than an hour to get it to pass. I probably missed 300 on Day $$$4$$$ due to this. I doubt the intended solution is better in complexity, so maybe more lenient constraints? (Or at least make subtasks passable with less constant-optimized solutions...)
In both problems, I doubt better solutions are intended complexity-wise (and even if they are, most people did not find them), so I was pretty annoyed. Nevertheless, I enjoyed the contests very much.
Also, when is analysis mode?
The announcement said that the analysis mode would be open for one week after the contest finished.
But for now, only Day 1 to Day 3 have analysis mode open.
Day 4 Migration can be solved in $$$O((n+q)\log n)$$$ time using segment tree merging.
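For readers unfamiliar with the technique, here is a generic segment-tree-merging sketch (dynamic nodes, destructive merge); it shows the general tool, not the full Migration solution:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Dynamic segment trees over [0, n) supporting point insert and
// destructive merge; the total work of all merges is bounded by the
// total number of nodes ever created, giving the overall log factor.
struct SegMerge {
    struct Node { int l = 0, r = 0; long long sum = 0; };
    vector<Node> t{1};  // index 0 acts as the null node
    int n;
    explicit SegMerge(int n) : n(n) {}
    int insert(int u, int lo, int hi, int pos, long long val) {
        if (!u) { t.push_back({}); u = (int)t.size() - 1; }
        t[u].sum += val;
        if (hi - lo > 1) {
            int mid = (lo + hi) / 2;
            if (pos < mid) { int w = insert(t[u].l, lo, mid, pos, val); t[u].l = w; }
            else           { int w = insert(t[u].r, mid, hi, pos, val); t[u].r = w; }
        }
        return u;
    }
    int merge(int u, int v) {           // merge tree v into tree u
        if (!u || !v) return u | v;     // if one is null, keep the other
        t[u].sum += t[v].sum;
        t[u].l = merge(t[u].l, t[v].l);
        t[u].r = merge(t[u].r, t[v].r);
        return u;
    }
};
```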
If you replace the Sparse Table with the Method of Four Russians, the total complexity is $$$O((n+q)A)$$$, since we just need to answer many range-minimum queries.
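For context, a standard sparse-table RMQ sketch ($$$O(n \log n)$$$ build, $$$O(1)$$$ query), which is exactly the part a Four-Russians variant speeds up:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Range-minimum sparse table: t[j][i] = min of a[i .. i + 2^j - 1].
// (Entries past the valid range on each level are never queried.)
struct SparseTable {
    vector<vector<int>> t;
    explicit SparseTable(const vector<int>& a) {
        int n = (int)a.size();
        int K = 32 - __builtin_clz(max(n, 1));  // levels needed
        t.assign(K, a);
        for (int j = 1; j < K; ++j)
            for (int i = 0; i + (1 << j) <= n; ++i)
                t[j][i] = min(t[j - 1][i], t[j - 1][i + (1 << (j - 1))]);
    }
    int query(int l, int r) const {  // min over a[l .. r], inclusive
        int j = 31 - __builtin_clz(r - l + 1);
        return min(t[j][l], t[j][r - (1 << j) + 1]);
    }
};
```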
Does https://judge.yosupo.jp/submission/275418 implement the Method of Four Russians? I decided to copy-paste the fastest submission on yosupo; it did not help much.
Sorry, I meant that on the problem Uiro, we can use Four Russians.
Yes, I am saying I actually did use it (by copy-pasting the fastest submission from yosupo, which implements it). The limits were still very tight for me. Can you provide your implementation?
Where can I find my code?
Thank you for your participation! As a tutor (and proposer of some of the problems), I'm happy that you enjoyed the contest :)
On "Rant about Limits" (for problem Migration):
Consider the optimized "naive simulation", which maintains the set of non-zero-populated vertices for each depth, and naively simulates the population movement using level ancestors (simple binary lifting, $$$O(\log(X-Y))$$$ time per query). Surprisingly, I believe that this should achieve $$$O(n^{1.5})$$$ time! (The worst case should be that there are $$$\sqrt{N}$$$ legs of length $$$\sqrt{N}$$$ from the root.)
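The level-ancestor part is plain binary lifting; a minimal sketch, assuming the table `up` is precomputed with `up[j][v]` being the $$$2^j$$$-th ancestor (and the root pointing to itself):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Jump k steps towards the root from v in O(log k) using a
// precomputed binary-lifting table.
int kthAncestor(int v, long long k, const vector<vector<int>>& up) {
    for (int j = 0; k > 0 && j < (int)up.size(); ++j, k >>= 1)
        if (k & 1) v = up[j][v];
    return v;
}
```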
Anyway, in the onsite editorial, an $$$O(n \log n)$$$ solution that reduces to (offline) 2D range-sum queries was mentioned, and a solution similar to yours was mentioned as an "alternative solution" :)
Oh wow, I didn't consider the naive simulation itself being fast enough. Yeah, then the constraints make more sense.
Yes, the contests were very fun. I am currently upsolving some of the problems too :) Wish there were more such contests in the year.
Where can one upsolve?
Is editorial coming soon?
Code for everything, except Exhibition3 / Fortune3 / Conference (camp25_*.cpp)
I think I can fill in Conference and Exhibition 3 soon. No idea for Fortune Telling 3 yet.
How to solve brave?
For problem "brave", two completely different approaches were mentioned in the onsite editorial, so I'll explain both of them.
A greedy algorithm works in this problem. Consider that the difficulty $$$\ell$$$ is fixed. At each moment, it is optimal to attack the enemy with the largest $$$P_i$$$ (among those who are alive). This can be simulated in $$$O(N \log N)$$$ time using a priority queue. Since $$$\ell$$$ can be binary searched, we can solve each query in $$$O(N \log N \log L)$$$ time. This passes Subtask 4 (totals 24 points).
Let $$$f(\ell)$$$ be the minimum possible penalty score when the difficulty is $$$\ell$$$. We want to calculate the function $$$f(\ell)$$$. Here, the graph of $$$f(\ell)$$$ is a convex polyline, and there are not so many points at which the slope changes (the onsite editorial mentions that it can be proved that there are at most $$$O(N^2)$$$ such points, and it may actually be $$$O(N)$$$).
The greedy algorithm can compute $$$f(\ell)$$$ for one $$$\ell$$$ in $$$O(N \log N)$$$ time. We build the graph of $$$f(\ell)$$$, starting from $$$\ell = 0$$$, by repeatedly computing the point at which the slope changes next, using binary search (each step computes $$$\log L$$$ values of $$$f(\ell)$$$). If we assume that the number of such points is $$$O(N)$$$, this runs in $$$O(N^2 \log N \log L)$$$ time, which passes Subtask 5 (totals 59 points) and may pass Subtask 6 (totals 67 points), depending on the speed.
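A generic sketch of this breakpoint search: for a convex $$$f$$$, the polyline follows the current slope exactly up to the next breakpoint, so the predicate below is monotone (here `f` stands for one evaluation of the greedy, and $$$l_0 + 1 \le L$$$ is assumed):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Given that f is convex and we stand at l0, find the largest l <= L
// that still lies on the line through (l0, f(l0)) with slope
// f(l0 + 1) - f(l0). Uses O(log L) evaluations of f.
long long nextBreakpoint(long long l0, long long L,
                         const function<long long(long long)>& f) {
    long long f0 = f(l0), slope = f(l0 + 1) - f0;
    long long lo = l0 + 1, hi = L;  // f(lo) is on the line by construction
    while (lo < hi) {
        long long mid = (lo + hi + 1) / 2;
        if (f(mid) == f0 + slope * (mid - l0)) lo = mid;
        else hi = mid - 1;
    }
    return lo;
}
```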
We assume that $$$P_1 \geq P_2 \geq \dots \geq P_N$$$. In order to compute the answer, it suffices to compute the following:
- $$$g_1$$$: the total time to attack enemy $$$1$$$ when acted optimally
- $$$g_2$$$: the total time to attack enemy $$$1, 2$$$ when acted optimally
- $$$\vdots$$$
- $$$g_N$$$: the total time to attack enemy $$$1, 2, \dots, N$$$ when acted optimally
This is because the total time to attack enemy $$$k$$$ is $$$g_k - g_{k-1}$$$ (with $$$g_0 = 0$$$), so the answer can be computed as $$$P_1 (g_1 - g_0) + P_2 (g_2 - g_1) + \dots + P_N (g_N - g_{N-1})$$$. Here, the total time to attack enemies $$$1, 2, \dots, k$$$ is the same as the total attacking time when only enemies $$$1, 2, \dots, k$$$ exist (and enemies $$$k+1, \dots, N$$$ do not). This is because the greedy algorithm strictly prioritizes the enemies with larger $$$P_i$$$'s. Therefore, the problem is reduced to computing the function $$$g_k(\ell)$$$, that is, $$$g_k$$$ for each difficulty $$$\ell$$$.
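In code, this combination step is a direct transcription of the formula (a sketch with $$$0$$$-indexed arrays):

```cpp
#include <bits/stdc++.h>
using namespace std;

// P is sorted so that P[0] >= P[1] >= ...; g[k] is the total time
// spent attacking enemies 0..k. Answer = sum of P_k * (g_k - g_{k-1}).
long long penaltyFromG(const vector<long long>& P,
                       const vector<long long>& g) {
    long long ans = 0, prev = 0;  // prev plays the role of g_{k-1}
    for (size_t k = 0; k < P.size(); ++k) {
        ans += P[k] * (g[k] - prev);
        prev = g[k];
    }
    return ans;
}
```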
The function $$$g_k(\ell)$$$ can be computed by simulating the change of "the time intervals of attacks" when gradually increasing $$$\ell$$$. This can be done with a priority queue that maintains "the times at which two intervals collide", which works in $$$O(N \log N)$$$ time. However, it can be improved by computing from later moments to earlier moments; then we can use a stack, which works in $$$O(N)$$$ time.
Overall, the optimal penalty score for $$$\ell = 0, 1, \dots, L$$$ can be computed in $$$O(N^2 + L)$$$ time, eventually using prefix sum techniques. Therefore, this problem can be solved in $$$O(N^2 + L + Q \log L)$$$ time, which gets 100 points.
Thanks a lot!
For a given set of enemies, the minimum "deficiency" (remaining HP that was not eliminated) can be computed in a Hall's-theorem-like manner, if you view the problem as a matching between enemies and time units. You can conclude that only the "suffix" of enemies in terms of time is important. The deficiency from each suffix is a linear function; consequently, $$$g_k(\ell)$$$ is a maximum of linear functions, and computing its piecewise representation is called the "convex hull trick".
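For completeness, here is a minimal monotone convex-hull-trick sketch for maintaining a maximum of linear functions; it is the generic tool, not the full solution, and it assumes lines are added in strictly increasing slope order:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Upper envelope of lines y = a*x + b. addLine requires strictly
// increasing slopes; query returns the maximum value at x in O(log).
struct UpperEnvelope {
    vector<long long> A, B;  // line i is A[i] * x + B[i]
    // True if the middle line m never wins against neighbours l and r.
    bool bad(int l, int m, int r) const {
        return (__int128)(B[r] - B[l]) * (A[m] - A[l]) >=
               (__int128)(B[m] - B[l]) * (A[r] - A[l]);
    }
    void addLine(long long a, long long b) {
        A.push_back(a); B.push_back(b);
        while (A.size() >= 3 && bad((int)A.size() - 3, (int)A.size() - 2,
                                    (int)A.size() - 1)) {
            A.erase(A.end() - 2);  // drop the dominated middle line
            B.erase(B.end() - 2);
        }
    }
    long long query(long long x) const {
        int lo = 0, hi = (int)A.size() - 1;
        while (lo < hi) {  // find the line attaining the maximum at x
            int mid = (lo + hi) / 2;
            if (A[mid] * x + B[mid] <= A[mid + 1] * x + B[mid + 1]) lo = mid + 1;
            else hi = mid;
        }
        return A[lo] * x + B[lo];
    }
};
```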
Can anyone please help: where can one upsolve these cool problems?
https://atcoder.jp/contests/joisp2025
When will the problems be available for upsolving? Will they be available on the oj.uz website? So excited to upsolve the exciting and challenging problems.
sadly oj.uz doesn't allow new submissions anymore :(
though I guess you can submit your solutions here
Why doesn't oj.uz allow new submissions?
no idea
When will the testcases be published?