Thank you to everyone who competed in mBIT 2025! It was a record-breaking year for us with 84 in-person participants and 135 online participants. We hope you all enjoyed it as much as we did.
We tried hard to make the problems fun and interesting for as wide a skill range as possible. In fact, this year's competition had an in-person team of middle schoolers unfamiliar with for loops, and an online team whose three participants are all in the top 15 in the world on Codeforces. We are glad to contribute to all niches of the computer science community: beginner and advanced, local and global. Thank you all for your enthusiasm.
Congratulations to the winners:
Online Advanced:
- 1st place — HoMaMaOvo — Masaki Nishimoto, Riku Kawasaki, Yui Hosaka
- 2nd place — mathforcesbox — Brian Xue, Bing-Dong Liu, Alex Chen
- 3rd place — bluepuppygreenturtle — Rain Jiang, Kai Jiang
Online Intermediate:
- 1st place — NULP Legacy — Roman Hultaichuk, Maksym Tropets, Yurii Tsymbal
In-person Intermediate:
- 1st place — A STREETCAR NAMED DESIRE — Aiden Feyerherm, Eric Xue, Kalan Warusa — McLean High School
- 2nd place — DP Demons — Taha Rawjani, Brian Tay, Arush Bodla — Academies of Loudoun High School
- 3rd place — Mt. Celeste Coding Association — William Park — Poolesville High School
In-person Beginner (High School):
- 1st place — Pollocks Potatoes — Sahith Sammidi, Satej Gandre, Atte Metsa — Independence High School
- 2nd place — True Kings — Ali Imran, Rushil Rajkumar, Rohit Shivarajkumar — Independence High School
- 3rd Place — potatopowered — Ethan Ma, Justin Liu — McLean High School
In-person Beginner (Middle School):
- 1st place — Team Blue — Noah Gonzalez, Shiven Khurana — Roberto Clemente Middle School and Parkland Middle School
- 2nd place — Code Blooded — Hasan Rawjani, Aarush Katam, Shrihan Jadigam — Brambleton Middle School
- 3rd place — starfishes — Nathan Sanchez, Manish Manimaran, Edison Gao — Takoma Park Middle School
Fun Round:
- 1st place — DP Demons — Taha Rawjani, Brian Tay, Arush Bodla — Academies of Loudoun High School
- 2nd place — A STREETCAR NAMED DESIRE — Aiden Feyerherm, Eric Xue, Kalan Warusa — McLean High School
Below are editorials to all problems featured across all divisions, in difficulty order. Editorials to the fun round problems are at the bottom. Anyone can access and submit to all problems at our permanent gym link. We hope to see you all next year! — mBIT Team

Pythia's Message
Learn how to print strings in your chosen programming language.
To print the message, you must use your programming language's specific print statement, such as print() in python or cout << in C++. Look at your programming language's documentation to figure out how to print to standard output. You should either print the concatenation of the two strings "Thank you " and $$$s$$$, or print the two strings separately with no newline character in between.
Time Complexity: $$$O(|s|)$$$
s = input()
print("Thank you", s)
#include <bits/stdc++.h>
using namespace std;
int main(){
cin.tie(0) -> sync_with_stdio(0);
string s; cin >> s;
cout << "Thank you " << s;
}
Thoth and Counting
You will need to multiply and add. Learn about mathematical operators in your chosen language.
There are six legs for every scarab and four legs for every serpopard, making $$$6x + 4y$$$ legs in total. This quantity can be calculated with the * and + operators. We should use a 64-bit integer type such as long long since the quantity may be big.
Time Complexity: $$$O(1)$$$
x, y = map(int, input().split())
print(6*x + 4*y)
#include <bits/stdc++.h>
using namespace std;
int main(){
cin.tie(0) -> sync_with_stdio(0);
long long x,y; cin >> x >> y;
cout << 6*x + 4*y;
}
Thor's Hammers
We want to print "Yes" if a certain condition is true, and "No" if the condition is not true. Learn how to write if/else statements in your chosen programming language.
To determine if the hammer can be picked up, an if-else statement can be used. A valid condition would check whether the god is Thor or the hammer is not Mjolnir.
Time Complexity: $$$O(|g| + |h|)$$$.
g, h = input().split()
if g == "thor" or h != "mjolnir":
    print("Yes")
else:
    print("No")
#include <bits/stdc++.h>
using namespace std;
int main(){
cin.tie(0) -> sync_with_stdio(0);
string g, h; cin >> g >> h;
if(g == "thor" || h != "mjolnir") cout << "Yes";
else cout << "No";
}
Idea: JMK100, DanTheMan.
Preparation: JMK100
Mercury's Debt
We could use a long chain of if statements to construct Roman numerals in general, but that's annoying. Converting between integers and Roman numerals, while possible, is nontrivial. Instead, notice that the constraints on the inputs are very small.
Hardcode the conversions between integers and Roman numerals. To convert from Roman numerals to integers, we need a data structure that we can index with strings. For example: RomanToInt["II"] = 2.
We can use maps or hashmaps to hardcode the conversion from Roman numerals to integers. Most languages have library versions of these data structures.
Different implementations exist depending on coding knowledge. With knowledge of only variables and if statements, various if statements can be chained together to match the input values and produce the output value. This process can be sped up with lists and/or maps, which let us directly convert from an integer to a Roman numeral or vice versa.
Time Complexity: $$$O(1)$$$. Technically our solution is proportional to $$$x$$$ and $$$y$$$.
RomanToInt = {"I": 1, "II": 2, "III": 3, "IV": 4, "V": 5, "VI": 6, "VII": 7, "VIII": 8, "IX": 9, "X": 10}
IntToRoman = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V", 6: "VI", 7: "VII", 8: "VIII", 9: "IX", 10: "X"}
x_rom, y_rom = input().split()
x_int = RomanToInt[x_rom]
y_int = RomanToInt[y_rom]
sum_int = x_int + y_int
sum_rom = IntToRoman[sum_int]
print(sum_rom)
#include <bits/stdc++.h>
using namespace std;
int main() {
map<string,int> RomanToInt = {
{"I", 1}, {"II", 2}, {"III", 3}, {"IV", 4}, {"V", 5},
{"VI", 6}, {"VII", 7}, {"VIII", 8}, {"IX", 9}, {"X", 10}
};
map<int,string> IntToRoman = {
{1, "I"}, {2, "II"}, {3, "III"}, {4, "IV"}, {5, "V"},
{6, "VI"}, {7, "VII"}, {8, "VIII"}, {9, "IX"}, {10, "X"},
{11, "XI"}, {12, "XII"}, {13, "XIII"}, {14, "XIV"}, {15, "XV"},
{16, "XVI"}, {17, "XVII"}, {18, "XVIII"}, {19, "XIX"}, {20, "XX"}
};
string x_rom, y_rom; cin >> x_rom >> y_rom;
int x_int = RomanToInt[x_rom];
int y_int = RomanToInt[y_rom];
int sum_int = x_int + y_int;
string sum_rom = IntToRoman[sum_int];
cout << sum_rom;
return 0;
}
Idea: JMK100, DanTheMan.
Preparation: JMK100
Tezcatlipoca's Interest
We can simulate each of the $$$n$$$ days in order. Learn about iteration and for loops in your chosen programming language.
To successfully implement this problem, we need to keep track of the amount of debt after each day passes. To do so, we use a for loop, where in every iteration we add or withdraw the specified amount and then double the debt variable.
Time Complexity: $$$O(n)$$$
n = int(input())
debt = 0
for i in range(n):
    type, amt = input().split()
    if type == "withdraw":
        debt += int(amt)
    if type == "repay":
        debt -= int(amt)
    debt *= 2
print(debt)
#include <bits/stdc++.h>
using namespace std;
int main() {
int n; cin >> n;
long long debt = 0;
for(int i=0; i<n; i++){
string type; int amt; cin >> type >> amt;
if(type == "withdraw") debt += amt;
else debt -= amt;
debt *= 2;
}
cout << debt;
return 0;
}
Noah's Ark
If we sorted the list of animals in alphabetical order, all instances of the same animal would be next to each other.
Sort the list of animals in alphabetical order. Now, all instances of the same animal are next to each other. We just need to verify that no element is different from both of its neighbors. We can check $$$a_i$$$ against $$$a_{i-1}$$$ and $$$a_{i+1}$$$ for all $$$i$$$, being careful when evaluating neighbors at the two ends of the list.
Time Complexity: $$$O(S \log n)$$$, where $$$S$$$ is the sum of the lengths of all strings.
We can create a map/dictionary from animal names to frequency counts. For each animal, we increase the value of the map/dictionary at the key of the animal name. Then, we iterate through our dictionary to check whether all values are at least two.
Time Complexity: $$$O(S \log n)$$$ with an ordered map, or expected $$$O(S)$$$ with a hashmap, where $$$S$$$ is the sum of the lengths of all strings.
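As a minimal sketch of this counting solution (the helper name `all_paired` is ours; Python's `collections.Counter` does the frequency counting):

```python
from collections import Counter

def all_paired(animals):
    # Count how many times each animal name appears; the answer is "Yes"
    # exactly when every species shows up at least twice.
    counts = Counter(animals)
    return all(c >= 2 for c in counts.values())
```

With the same input format as the sorting solution, the answer would be printed as `print("Yes" if all_paired(a) else "No")`.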
n = int(input())
a = []
for i in range(n):
    animal = input()
    a.append(animal)
a = sorted(a)
ans = "Yes"
for i in range(n):
    if (i == 0 or a[i] != a[i-1]) and (i == n-1 or a[i] != a[i+1]):
        ans = "No"
print(ans)
#include <bits/stdc++.h>
using namespace std;
int main() {
int n; cin >> n;
vector<string> a(n);
for(int i=0; i<n; i++) cin >> a[i];
sort(a.begin(), a.end());
string ans = "Yes";
for(int i=0; i<n; i++){
if((i==0 || a[i] != a[i-1]) && (i==n-1 || a[i] != a[i+1])) ans = "No";
}
cout << ans;
return 0;
}
Idea: JMK100, DanTheMan.
Preparation: JMK100
Itzamna's Bracelet
Build the bracelet bead by bead. It doesn't matter where we start since it's circular.
The next bead in our sequence must be a neighbor of the previous bead. There are only two options. How can we deduce which one comes next?
To rebuild the bracelet, we can arbitrarily start at piece one. Then, we pick one of its neighbors and place it as the next piece. Then, for each piece afterwards, it must be the piece which is the neighbor to the previous piece and is not equal to the piece two indices prior. This continues for the entire bracelet. We can figure out the neighboring beads if we maintain the pair of adjacent beads in an array.
Time Complexity: $$$O(n)$$$.
n = int(input())
a = []
for i in range(n):
    x, y = map(int, input().split())
    a.append((x-1, y-1))
curr = 0
prev = -1
for i in range(n):
    print(curr+1, end=" ")
    if prev != a[curr][0]:
        prev = curr
        curr = a[curr][0]
    else:
        prev = curr
        curr = a[curr][1]
#include <bits/stdc++.h>
using namespace std;
int main() {
int n; cin >> n;
vector<pair<int,int>> p(n);
for(int i=0; i<n; i++){
cin >> p[i].first >> p[i].second;
p[i].first--;
p[i].second--;
}
int curr = 0, prev = -1;
for(int i=0; i<n; i++){
cout << curr+1 << " ";
if(p[curr].first != prev){
prev = curr;
curr = p[curr].first;
}
else{
prev = curr;
curr = p[curr].second;
}
}
return 0;
}
Vishvakarma's Floor
If the length and width are at least two, then $$$2w + 2l - 4 = x$$$ and $$$(w-2) \cdot (l-2) = y$$$.
Iterate over $$$w$$$ and see if the $$$l$$$ that matches the first equation also matches the second equation.
Watch out for the edge case when the width or the height is one.
If we know the width and length, we could calculate the amount of red and blue tiles with the formulas $$$x = (2l + 2w - 4)$$$ and $$$y = (l - 2) \cdot (w - 2)$$$. Let's check all $$$l$$$ from $$$1$$$ to $$$x+y$$$ and see if there is a $$$w$$$ that satisfies both constraints. For the first constraint, we have $$$w = (x + 4 - 2l)/2$$$. Check if this $$$w$$$ also satisfies the second equation.
We additionally have to check the edge cases for $$$l$$$ equals 1, which can be accounted for manually ($$$y$$$ must be equal to zero in this case).
Time Complexity: $$$O(x + y)$$$
Read the previous solution. To check if $$$w$$$ and $$$l$$$ were valid, note that we used equations $$$x + y = wl$$$ and $$$y = (w - 2) \cdot (l - 2)$$$. We can solve these equations for $$$w$$$ and $$$l$$$ using algebra, and then the quadratic formula. Be careful about double precision.
Time Complexity: $$$O(1)$$$
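Here is a hedged sketch of that $$$O(1)$$$ approach. From the two equations, $$$w + l = \frac{x+4}{2}$$$ and $$$wl = x + y$$$, so $$$w$$$ and $$$l$$$ are the roots of $$$t^2 - \frac{x+4}{2}t + (x+y) = 0$$$; using `math.isqrt` avoids the double-precision pitfall mentioned above. The function name `solve_floor` and its return conventions (a pair, or -1) are just for illustration:

```python
import math

def solve_floor(x, y):
    # Edge case: a width of 1 has no interior tiles, so y must be 0.
    if y == 0:
        return (x + y, 1)
    # With w, l >= 2: w + l = (x+4)/2 and w*l = x + y.
    if x % 2 == 1:
        return -1
    s = (x + 4) // 2            # w + l
    p = x + y                   # w * l
    disc = s * s - 4 * p        # discriminant of t^2 - s*t + p
    if disc < 0:
        return -1
    r = math.isqrt(disc)        # exact integer square root, no floating point
    if r * r != disc or (s + r) % 2 != 0:
        return -1
    l, w = (s + r) // 2, (s - r) // 2
    if w < 2:                   # both dimensions must be at least 2 here
        return -1
    return (l, w)
```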
x, y = map(int, input().split())
if y == 0:
    print(x+y, 1)
else:
    if x % 2 == 1 or x < 4:
        print(-1)
    else:
        for l in range(2, x//2 + 1):
            w = (x + 4 - 2*l)//2
            if (l-2) * (w-2) == y:
                print(l, w)
                break
        else:
            print(-1)
#include <bits/stdc++.h>
using namespace std;
int main() {
int x,y; cin >> x >> y;
if(y==0) cout << x+y << " " << 1;
else{
if(x%2 == 1 || x < 4) cout << "-1";
else{
for(int l = 2; l <= x/2; l++){
int w = (x + 4 - 2*l)/2;
if((l-2) * (w-2) == y){
cout << l << " " << w;
break;
}
if(l == x/2) cout << "-1";
}
}
}
return 0;
}
Idea: kchwoony210
Preparation: JMK100
Nuwa's Grid
If $$$nm$$$ is divisible by $$$k$$$, there's always a solution. If $$$nm$$$ isn't divisible by $$$k$$$, there's no solution.
Find a way to walk between adjacent cells on the grid where you go through each cell exactly once. Why is this helpful?
There is one condition to check for whether the grid can be colored in a valid arrangement: the area of the grid must be divisible by $$$k$$$. If it is not, no valid coloring exists. If the area is divisible by $$$k$$$, then we can show that there is always a solution.
Find any continuous path on the grid that goes through every cell (in computer science, this is called a Hamiltonian path). Then, split the path into contiguous blocks of $$$\frac{nm}{k}$$$ cells and give the $$$i$$$-th block color $$$i$$$. This way, each cell is connected to the previous cell of its color (by definition of our path), as long as it's not the first cell of its color. By induction, this means each cell is connected to all cells of its color.
One convenient Hamiltonian Path is a snakelike pattern. Travel row $$$1$$$ from left to right, then go down and travel row $$$2$$$ from right to left. Then go down to row $$$3$$$ and travel from left to right, and so on.
Time Complexity: $$$O(nm)$$$
n, m, k = map(int, input().split())
if (n*m) % k != 0:
    print(-1)
else:
    cpc = (n*m)//k
    for i in range(n):
        for j in range(m):
            if i % 2 == 0:
                print((i*m + j)//cpc + 1, end=" ")
            else:
                print((i*m + (m-1-j))//cpc + 1, end=" ")
        print()
#include <bits/stdc++.h>
using namespace std;
int main() {
int n, m, k; cin >> n >> m >> k;
if((n*m) % k != 0) cout << -1;
else{
int cpc = (n*m)/k;
for(int i=0; i<n; i++){
for(int j=0; j<m; j++){
int col = (i%2 == 0) ? j : m-1-j;
cout << (i*m + col)/cpc + 1 << " ";
}
cout << "\n";
}
}
return 0;
}
Enki's Waters
Let's say you begin flowing water at position $$$s$$$. Under what conditions will we be able to water city $$$q$$$?
If $$$q \gt s$$$, then there can't exist $$$s \leq i \lt q$$$ such that $$$h_i \lt h_{i+1}$$$. If $$$q \lt s$$$, then there can't exist $$$q \lt i \leq s$$$ such that $$$h_i \lt h_{i-1}$$$.
If $$$q \gt s$$$ and $$$s$$$ is a valid starting point, then $$$s+1 \leq q$$$ is also a valid starting point. Similarly, if $$$q \lt s$$$ and $$$s$$$ is a valid starting point, then $$$s-1 \geq q$$$ is also a valid starting point.
Since water flows downwards, the valid starting locations of the water are where there is a downward path towards the city. In other words, there is a non-decreasing path from the city to the starting location. Using this alternate definition, we can find the largest non-decreasing path on the left side and right side of the city, getting a range of possible starting locations for the water. We then find the amount of starting locations in this range and print it.
Time Complexity: $$$O(n)$$$
n, q = map(int, input().split())
q -= 1
a = list(map(int, input().split()))
ans = 1
for i in range(q+1, n):
    if a[i] < a[i-1]:
        break
    ans += 1
for i in range(q-1, -1, -1):
    if a[i] < a[i+1]:
        break
    ans += 1
print(ans)
#include <bits/stdc++.h>
using namespace std;
int main() {
int n, q; cin >> n >> q; q--;
vector<int> a(n);
for(int i=0; i<n; i++) cin >> a[i];
int ans = 1;
for(int i=q+1; i<n; i++){
if(a[i] < a[i-1]) break;
ans++;
}
for(int i=q-1; i>=0; i--){
if(a[i] < a[i+1]) break;
ans++;
}
cout << ans;
return 0;
}
Gwydion's Grid
Try a 4 by 4 grid. Which patterns can generalize to larger grids?
Alternate symbols every row and every other column.
There are multiple valid constructions, but a simple one is the following pattern, which avoids three in a row in every direction.
0011
1100
We repeat this grid to fill the entire grid, even if the pattern gets cut off on the edges. For example, if $$$n=5$$$ and $$$m=7$$$, we would print:
0011001
1100110
0011001
1100110
0011001
Because the symbol flips every row and every two columns, the pattern can be implemented where the bit in position (i,j) is equal to $$$(i+\left\lfloor\frac{j}{2}\right\rfloor) \bmod 2$$$.
test = int(input())
for tc in range(test):
    n, m = map(int, input().split())
    for i in range(n):
        for j in range(m):
            print((i+j//2)%2, end="")
        print("")
#include<bits/stdc++.h>
using namespace std;
int main(){
cin.tie(0)->sync_with_stdio(0);
int test;
cin >> test;
while(test--){
int n,m;
cin >> n >> m;
for(int i = 0; i < n; i++){
for(int j = 0; j < m; j++){
cout << (i+j/2)%2;
}
cout << "\n";
}
}
}
Idea: DanTheMan.
Preparation: DanTheMan.
Anansi's Game
If there are multiple $$$i$$$ such that $$$a_i = i$$$, then which one should we choose?
Process the array backwards.
If at any point, there are multiple piles that we could choose, it is always best to greedily choose the smallest one. If we chose a larger pile, then a seed would be added to all smaller piles. At least one additional pile would exceed its pile number, so we wouldn’t be able to select that pile thereafter. Therefore, always choosing the smallest pile maximally preserves our options.
Now, how can we quickly compute the number of operations we can achieve following this strategy? Directly simulating the moves is too slow because the answer may be large.
Instead, we can work backwards. Let $$$i$$$ be the index of our current pile. As we iterate from $$$i=n$$$ to $$$i=1$$$, we will keep track of the total number of times that piles to the right of pile $$$i$$$ will get selected. Then, we add this number to the number of seeds initially in pile $$$i$$$, which gives us the total number of distinct seeds that pile $$$i$$$ will hold over the entire duration of the game. This number, divided by $$$i$$$ rounded down, is the number of times we can select pile $$$i$$$ over the entire duration of the game. Of course, if pile $$$i$$$ starts with more than $$$i$$$ seeds, then it can never be selected.
Our final answer is the total number of times a pile will be selected after iterating through the entire array.
test = int(input())
for tc in range(test):
    n = int(input())
    a = list(map(int, input().split()))
    ans = 0
    for i in range(n-1, -1, -1):
        if a[i] <= i+1:
            ans += (a[i]+ans)//(i+1)
    print(ans)
#include<bits/stdc++.h>
using namespace std;
int main(){
cin.tie(0)->sync_with_stdio(0);
int test; cin >> test;
while(test--){
int n; cin >> n;
vector<int> a(n);
for(int i=0; i<n; i++) cin >> a[i];
long long ans = 0;
for(int i=n-1; i>=0; i--){
if(a[i]<=i+1){
ans += (a[i]+ans)/(i+1);
}
}
cout << ans << '\n';
}
return 0;
}
Idea: DanTheMan.
Preparation: DanTheMan.
Hephaestus' Helpers
It is always best to wait as long as possible before using a Golden Maiden, and to give a Golden Maiden all broken tools.
Use two pointers. The first pointer loops through tools as Hephaestus uses them. The second pointer stores which prefix of tool uses have been repaired.
The optimal assignment of tools to maidens is greedy. When Hephaestus uses a Golden Maiden (GM), he should give her all damaged tools. Additionally, Hephaestus should always wait as long as possible before giving tools to a GM.
So, our procedure is as follows: if there is a damaged tool that we need to use in exactly $$$s$$$ seconds, then use a GM and repair all damaged tools. Otherwise, we can wait.
We iterate through the tools, marking tools as damaged as they come up. When we encounter an already-damaged tool, we go back in time by $$$s$$$ seconds to use a GM, and mark all tools used before this time as repaired. We store a pointer to the last repaired tool use, and increment the pointer until it comes after the current GM (which is used $$$s$$$ seconds before the time of the current tool). If the current tool is not repaired in this process, then it is impossible.
test = int(input())
for tc in range(test):
    n, s = map(int, input().split())
    a = []
    for i in range(n):
        tool, time = map(int, input().split())
        tool -= 1
        a.append((tool, time))
    p, used = 0, 0
    clean = [True] * n
    impossible = False
    for (tool, time) in a:
        if not clean[tool]:
            used += 1
            while p < n and a[p][1] <= time - s:
                clean[a[p][0]] = True
                p += 1
        if not clean[tool]:
            impossible = True
        clean[tool] = False
    if impossible:
        print(-1)
    else:
        print(used)
#include<bits/stdc++.h>
using namespace std;
int main(){
cin.tie(0)->sync_with_stdio(0);
int test; cin >> test;
while(test--){
int n, s; cin >> n >> s;
vector<pair<int,int>> a(n);
for(int i=0; i<n; i++){
cin >> a[i].first >> a[i].second;
a[i].first--;
}
int p = 0, used = 0;
bool impossible = false;
vector<bool> clean(n, true);
for(int i=0; i<n; i++){
if(!clean[a[i].first]){
used++;
while(p<n && a[p].second <= a[i].second-s){
clean[a[p].first] = true;
p++;
}
}
if(!clean[a[i].first]) impossible = true;
clean[a[i].first] = false;
}
if(impossible) cout << "-1\n";
else cout << used << "\n";
}
return 0;
}
Idea: DanTheMan.
Preparation: DanTheMan.
Domovoy's Tiles
Fix the first element of the subsequence. Then, how can you construct the longest valid subsequence?
An array element $$$a_i$$$ can only appear in subsequences whose starting value divides $$$a_i$$$, so it is part of at most $$$d(a_i)$$$ candidate subsequences, where $$$d(x)$$$ is the number of divisors of $$$x$$$.
Let's fix the starting value $$$v$$$ of the subsequence, and then greedily construct the rest of the subsequence. First, we add an element of value $$$v$$$, then of value $$$2v$$$, then of value $$$3v$$$, and so on until there is no element that we can add. When adding an element of a given value, we greedily select the leftmost index of the value that comes after the last element of the subsequence. This index can be found with binary search. We must preprocess by creating a list of indices for each array value. These can be stored in a map/dict with keys equal to the array values.
Now, we just need to check every value of $$$v$$$; the answer is the longest subsequence attained over all possible $$$v$$$'s. Since $$$v$$$ itself must be an element of $$$a$$$, there are at most $$$n$$$ options.
At first, this algorithm may seem slow. However, notice that each index $$$i$$$ can only ever be added to the subsequence for values of $$$v$$$ that are a factor of $$$a_i$$$, so $$$i$$$ can be “found” by the algorithm at most $$$d(a_i)$$$ times. Therefore, the total algorithm amortizes to $$$O(n \cdot \log(n) \cdot \max_i{(d(a_i))})$$$.
test = int(input())
for tc in range(test):
    n = int(input())
    a = list(map(int, input().split()))
    occs = {}
    done = {}
    for i in range(n):
        if a[i] not in occs:
            occs[a[i]] = []
            done[a[i]] = False
        occs[a[i]].append(i)
    ans = 0
    for i in range(n):
        if done[a[i]]:
            continue
        done[a[i]] = True
        tot, ind = 1, i
        while a[i]*(tot+1) in occs:
            l, r = -1, len(occs[a[i]*(tot+1)])
            while l+1 < r:
                m = (l+r)//2
                if occs[a[i]*(tot+1)][m] > ind:
                    r = m
                else:
                    l = m
            if r == len(occs[a[i]*(tot+1)]):
                break
            ind = occs[a[i]*(tot+1)][r]
            tot += 1
        ans = max(ans, tot)
    print(ans)
#include<bits/stdc++.h>
using namespace std;
int main(){
cin.tie(0)->sync_with_stdio(0);
int test; cin >> test;
while(test--){
int n; cin >> n;
vector<int> a(n);
for(int i=0; i<n; i++) cin >> a[i];
map<int, vector<int>> occs;
for(int i=0; i<n; i++) occs[a[i]].push_back(i);
map<int, bool> done;
int ans = 0;
for(int i=0; i<n; i++){
if(done[a[i]]) continue;
done[a[i]] = true;
int tot = 1, ind = i;
while(lower_bound(occs[a[i]*(tot+1)].begin(), occs[a[i]*(tot+1)].end(), ind) != occs[a[i]*(tot+1)].end()){
ind = *lower_bound(occs[a[i]*(tot+1)].begin(), occs[a[i]*(tot+1)].end(), ind);
tot ++;
}
ans = max(ans, tot);
}
cout << ans << "\n";
}
return 0;
}
Idea: JMK100.
Preparation: JMK100
Azhdaha's Adventure
Imagine holding up the tree by the two heads of the caterpillar, keeping all vertices in between taut, and letting all of the other vertices hang below.
Once Azhdaha gets one end of his body to a vertex on the line, it is always possible for Azhdaha to get to any vertex in the subtree below. Now, just find which vertices on the line Azhdaha can get to.
Getting a head to a root on the line allows the other head to get to roots on the other side of the line. Those other roots allow the first head to get to more roots and so on.
Hold the tree sideways with Azhdaha's body along the top and all other nodes hanging down. Azhdaha's body is a path, and each vertex along the path has a subtree beneath it. Call the vertices on the path roots. We will label these root vertices $$$1$$$, …, $$$m$$$ from left to right where root $$$1$$$ is vertex $$$u$$$ and root $$$m$$$ is vertex $$$v$$$.
Observe that we can reach a vertex $$$x$$$ if and only if we can get to its root $$$r$$$. This is because once we get a head to $$$r$$$, we can move that head straight down to $$$x$$$, since no vertices along the path from $$$r$$$ to $$$x$$$ will ever overlap with the caterpillar. Additionally, all paths to $$$x$$$ require going through $$$r$$$. Now we have reduced the problem to finding which root vertices we can get to.
Find the depth and size of each subtree. Suppose you can get the left head to some root $$$i$$$ with depth $$$d$$$. Then, you can move the left head downwards $$$d$$$ times, which moves the right head to the left $$$d$$$ times. This makes all roots up to $$$d-(i-1)$$$ from the right reachable to the right head. Similarly, if you can get the right head to this root vertex, it makes all root vertices up to $$$d-(m-i)$$$ vertices from the left reachable to the left head. This tells us how getting one head to a root opens up roots for the other head.
By a process of back and forth “wiggling” the left and right heads, we can see how far right our left head can get to and how far left our right head can get to. Call the potential of a head the furthest to the middle that the head can get to when traveling down from one of the roots within the other head's bound. Each bound and each potential can be stored by a pointer. When head A's bound is further away from the middle than its potential, we can extend head A's bound and update head B's potential to the max of its previous value and what can be attained from head A's new bound.
Once the bounds are equal to the potentials, we have the exact set of root vertices we can get to. Then, we add up their subtree sizes to get the total number of reachable vertices.
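The back-and-forth bound extension can be sketched on the array of subtree heights alone (a Python rendering of the same routine as in the C++ solution below; the name `solve_heights` and exact bookkeeping are ours):

```python
def solve_heights(a):
    # a[i] is the height of the subtree hanging from root i along the path.
    # Returns (p1, p2): roots 0..p1 are reachable by the left head and
    # roots p2..n-1 by the right head.
    n = len(a)
    p1, p2 = 0, n - 1              # current bounds of the two heads
    l, r = a[p2] - 1, a[p1] - 1    # current potentials toward the middle
    # Wiggle: extending one head's bound may raise the other head's
    # potential, so alternate until neither bound can move.
    while (p1 < l and p1 < n - 1) or (p2 > n - 1 - r and p2 > 0):
        if p1 < l and p1 < n - 1:
            p1 += 1
            r = max(r, a[p1] - 1 - p1)
        if p2 > n - 1 - r and p2 > 0:
            p2 -= 1
            l = max(l, a[p2] - 1 - (n - 1 - p2))
    return p1, p2
```

Summing the subtree sizes of roots $$$0..p1$$$ and $$$p2..n-1$$$ then gives the final answer.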
#include<bits/stdc++.h>
using namespace std;
pair<int,int> solvea(vector<int> a){
int n = (int)a.size();
int p1 = 0, p2 = n-1;
int l = a[p2]-1, r = a[p1]-1;
while((p1 < l && p1<n-1) || (p2 > n-1-r && p2>0)){
if(p1 < l && p1 < n-1){
p1++;
r = max(r, a[p1]-1-p1);
}
if(p2 > n-1-r && p2 > 0){
p2--;
l = max(l, a[p2]-1-(n-1-p2));
}
}
return make_pair(p1, p2);
}
int main(){
cin.tie(0)->sync_with_stdio(0);
int test; cin >> test;
while(test--){
int n; cin >> n;
int u,v; cin >> u >> v; u--; v--;
vector<vector<int>> adj(n);
for(int i=1; i<=n-1; i++){
int x,y; cin >> x >> y;
adj[x-1].push_back(y-1);
adj[y-1].push_back(x-1);
}
// get path between u and v
vector<int> path;
function<bool(int,int)> getpath = [&](int node, int par){
if(node == v){
path.push_back(node);
return true;
}
for(int c: adj[node]){
if(c != par && getpath(c,node)){
path.push_back(node);
return true;
}
}
return false;
}; getpath(u, u);
vector<bool>inpath(n);
for(int c: path) inpath[c] = true;
// get heights and sizes of all nodes on path
vector<int> height(n), sizes(n);
function<void(int,int)> dfs = [&](int node, int par){
for(int c: adj[node]){
if(c != par && !inpath[c]){
dfs(c, node);
height[node] = max(height[node], height[c]);
sizes[node] += sizes[c];
}
}
height[node] += 1;
sizes[node] += 1;
};
for(int c: path) dfs(c,c);
// reduce problem to array of heights on path
int m = (int)path.size();
vector<int> a;
for(int c: path) a.push_back(height[c]);
pair<int,int> ends = solvea(a);
int ans = 0;
for(int i=0; i<m; i++){
if(i<=ends.first || i>=ends.second) ans += sizes[path[i]];
}
cout << ans << "\n";
}
return 0;
}
Idea: DanTheMan.
Preparation: DanTheMan.
Xochipilli's Dance
When you are in the same position as Xochipilli, it is always optimal to hold the position at least until the move ends.
Let $$$dp_i$$$ be the maximum amount of time you can match with Xochipilli when you are at the start of move $$$i$$$ and in position $$$p_i$$$.
Which cases for transitions can be condensed? You should end up with three general cases.
Let $$$dp_i$$$ be the maximum amount of time you can match with Xochipilli when you are at the start of move $$$i$$$ and in position $$$p_i$$$. Greedily, it is always optimal to stay in position $$$p_i$$$ for the entire duration of $$$t_i$$$ seconds. This is because switching early costs us one point per second early we switch, and these extra seconds can not make up more than one point per second.
So after $$$t_i$$$ seconds (call this time $$$T$$$), we have to decide which position to transition to. Let $$$j$$$ be the index of the move that is taking place at time $$$T + k$$$.
After $$$T$$$, we could transition directly to the position $$$p_j$$$. In this case, it is the same situation as the definition of $$$dp_j$$$, but we are starting somewhere in the middle of the move so we lose out on some of the synchronization at the beginning of the move. In this case we get $$$dp_j -$$$ (time between $$$T + k$$$ and start of move $$$j$$$) points.
Another option is to go to some move after $$$j$$$. This gives us $$$\max{dp[j’]}$$$ points for all $$$j’ \gt j$$$. We can compute this with suffix max, or since $$$j$$$ is monotonic on $$$i$$$, we can store this max as a variable and update it as we move $$$i$$$ from right to left.
A final transition option is to not change moves at all, and stay in position $$$p_i$$$. The next time we are synchronized with Xochipilli is when position $$$p_i$$$ returns. This gives us $$$dp_{last_i}$$$ points where $$$last_i$$$ is the leftmost index $$$x \gt i$$$ such that $$$p_x = p_i$$$.
So, the dp transition takes the maximum of these three options. We compute dp values from right to left, maintaining $$$j$$$ with a pointer and updating the $$$last$$$ array as we iterate.
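For completeness, here is a direct Python translation of the dp described above (mirroring the C++ below; the function name `solve_dance` and the `moves` format are ours, and we assume $$$k \geq 1$$$):

```python
def solve_dance(n, k, moves):
    # moves[i] = (p_i, t_i): 0-indexed position and duration of move i.
    # Returns the maximum total time spent synchronized with Xochipilli.
    ends = [0] * n                         # ends[i]: time at which move i ends
    for i in range(n):
        ends[i] = (ends[i-1] if i > 0 else 0) + moves[i][1]
    dp = [0] * n                           # dp[i] as defined in the editorial
    maxdp = [0] * n                        # suffix maximum of dp
    last = {}                              # last[p]: leftmost later index with position p
    pt = n                                 # index j of the move active at time ends[i] + k
    for i in range(n - 1, -1, -1):
        while ends[i] + k <= ends[pt - 1]:
            pt -= 1
        p, t = moves[i]
        best = 0
        if p in last:                      # stay put until position p returns
            best = max(best, dp[last[p]])
        if pt < n:                         # jump into the move active at time ends[i] + k
            best = max(best, dp[pt] - (ends[i] + k - ends[pt - 1]))
        if pt < n - 1:                     # start fresh at any later move
            best = max(best, maxdp[pt + 1])
        dp[i] = best + t
        maxdp[i] = max(dp[i], maxdp[i + 1]) if i < n - 1 else dp[i]
        last[p] = i
    return max(dp)
```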
#include<bits/stdc++.h>
using namespace std;
int main(){
cin.tie(0)->sync_with_stdio(0);
int test; cin >> test;
for(int tc=0; tc<test; tc++){
int n,k; cin >> n >> k;
vector<pair<int,int>> a(n);
for(int i=0; i<n; i++){
cin >> a[i].first >> a[i].second;
a[i].first--;
}
vector<long long> ends(n);
for(int i=0; i<n; i++){
if(i>0) ends[i] = ends[i-1];
ends[i] += a[i].second;
}
vector<long long> dp(n), maxdp(n), last(n,-1);
int pt = n;
for(int i=n-1; i>=0; i--){
while(ends[i]+k <= ends[pt-1]) pt--;
if(last[a[i].first] != -1) dp[i] = max(dp[i], dp[last[a[i].first]]);
if(pt < n) dp[i] = max(dp[i], dp[pt] - (ends[i]+k - ends[pt-1]));
if(pt < n-1) dp[i] = max(dp[i], maxdp[pt+1]);
dp[i] += a[i].second;
maxdp[i] = dp[i];
if(i < n-1) maxdp[i] = max(maxdp[i], maxdp[i+1]);
last[a[i].first] = i;
}
long long ans = 0;
for(int i=0; i<n; i++) ans = max(ans, dp[i]);
cout << ans << "\n";
}
return 0;
}
Idea: DanTheMan.
Preparation: DanTheMan.
Nian's Fear
Process objects in decreasing order of y-coordinate.
Call a set of banners a group if they all share a y-coordinate and a wall or rocket to the left and a wall or rocket to the right.
A group of banners blocks out contiguous ranges of x-coordinates below. What data structure can support range updates?
For a group of banners, move all banners as left as possible, and then go from right to left and move each one as far right as possible. Look at the gaps formed by this process. All x-coordinates not part of a gap are blocked out.
Whether a firecracker can be launched will never be affected by the presence of firecrackers at a lower height. Thus, by having firecrackers launch from top to bottom, we allow each firecracker to have its best chance of launching. We will do a line sweep downwards, and will evaluate rockets and firecrackers together in one event queue. This way, each object will have the most mobility possible.
The banners block any firecracker from launching on some portions of the number line. We will represent whether each x-coordinate is blocked from launch with a segment tree (or a fenwick tree) on all coordinates from $$$0$$$ to $$$w$$$. Initially, all segments are unblocked, so the data structure contains all zeros. When a segment gets blocked, we do a range assignment (or addition) on that segment. When we get to a rocket, we do a point query on its coordinate. If the point evaluates to zero, then the rocket can take off. Otherwise, the rocket is stuck, and serves as a barrier for banners below it.
Now, how do we determine which segments a group of banners blocks? We group the banners based on the pair of rockets/walls that they are in between. We evaluate each group independently, since they can never cross or bump into each other. For a given group, we start with all banners as far right as possible, then for each banner from left to right, we move that banner as far left as possible (imagine incrementing a foosball score tracker). The gaps in the banners at each of these positions are the portions of the number line that do not get blocked. The segments between each consecutive gap are the segments that get blocked out.
#include<bits/stdc++.h>
using namespace std;
struct FenwickTree{
int n;
vector<int> f;
void init(int n_){
n = n_;
f.resize(n);
}
void add(int i, int v){
for(; i<n; i=i|(i+1)) f[i] += v;
}
int sum(int i){
int ans = 0;
for(; i>=0; i=(i&(i+1))-1) ans += f[i];
return ans;
}
};
int main(){
cin.tie(0)->sync_with_stdio(0);
int test; cin >> test;
while(test--){
int n,m,x; cin >> n >> m >> x;
vector<pair<int,int>> rockets(n);
for(int i=0; i<n; i++) cin >> rockets[i].first >> rockets[i].second;
vector<pair<pair<int,int>, int>> banners(m);
for(int i=0; i<m; i++) cin >> banners[i].first.first >> banners[i].first.second >> banners[i].second;
// line sweep
vector<pair<int,pair<int,int>>> events;
for(int i=0; i<n; i++) events.push_back(make_pair(rockets[i].second, make_pair(1, i)));
for(int i=0; i<m; i++) events.push_back(make_pair(banners[i].second, make_pair(0, i)));
sort(events.begin(), events.end(), [&](auto x, auto y){return x>y;});
FenwickTree ft; ft.init(x+1);
vector<bool> ans(n);
set<int> s;
s.insert(0); s.insert(x);
int last_height = -1;
vector<int> pending;
for(pair<int,pair<int,int>> event: events){
int height = event.first, type = event.second.first, index = event.second.second;
if(height != last_height){
unordered_map<int,vector<pair<int,int>>> groups;
for(int c: pending){
groups[*(s.lower_bound(banners[c].first.first))].push_back(banners[c].first);
}
for(pair<int, vector<pair<int,int>>> group: groups){
sort(group.second.begin(), group.second.end());
int rightwall = group.first;
int leftwall = *(--s.find(rightwall));
int total_length = 0;
for(pair<int,int> ban: group.second) total_length += ban.second-ban.first+1;
int gapsize = rightwall-leftwall-1-total_length;
if(gapsize == 0){
ft.add(leftwall+1, 1);
ft.add(rightwall, -1);
continue;
}
pair<int,int> gap = make_pair(leftwall+1, leftwall+gapsize);
for(pair<int,int> ban: group.second){
pair<int,int> newgap = make_pair(gap.first+ ban.second-ban.first+1, gap.second + ban.second-ban.first+1);
if(gap.second+1 <= newgap.first-1){
ft.add(gap.second+1, 1);
ft.add(newgap.first, -1);
}
gap = newgap;
}
}
pending.clear();
}
if(type == 1){ // rocket
if(ft.sum(rockets[index].first) > 0){
ans[index] = false;
s.insert(rockets[index].first);
}
else ans[index] = true;
}
if(type == 0){ // banner
pending.push_back(index);
}
last_height = height;
}
for(int i=0; i<n; i++) cout << ans[i];
cout << "\n";
}
return 0;
}
Idea: DanTheMan.
Preparation: DanTheMan.
Kāne's Tree
Call an operation of the second type a sprouting and an operation of the first type a manual. Let $$$dp_{i, j}$$$ be the minimum number of manuals to construct the subtree at node $$$i$$$ when using exactly $$$j$$$ sproutings.
Assign sprouting counts of $$$0$$$, $$$1$$$, ... $$$j-1$$$ to $$$j$$$ different children. The other children can get any sprouting count from $$$0$$$ to $$$t$$$.
Use bitmask DP to make these assignments.
Define the type of a subtree to be the maximum number of sproutings that could possibly be applied to the subtree. How many subtrees of each type are there at most? How does this bound the total time complexity?
Call a type-2 operation a sprouting and a type-1 operation a manual. Define the type of a subtree to be the maximum number of sproutings that could possibly be applied to that subtree.
Let $$$dp_{i,j}$$$ = the minimum number of manuals to construct $$$i$$$’s subtree when using exactly $$$j$$$ sproutings. We will inductively assume we have computed dp values for all nodes in $$$i$$$’s subtree. Additionally, we can assume $$$j \leq k$$$ where $$$k$$$ is the type of subtree $$$i$$$.
Call $$$node_1$$$, $$$node_2$$$, … $$$node_j$$$ the nodes that come directly out of $$$i$$$ after the first, second, … $$$j$$$th sprouting. The subtree of $$$node_x$$$ will experience $$$j-x$$$ sproutings because out of the $$$j$$$ sproutings that were prescribed to subtree $$$i$$$, $$$x$$$ of them were already used before $$$node_x$$$ was created. The minimum cost to construct $$$node_x$$$’s subtree is now $$$dp_{node_x,j-x}$$$.
However, node $$$i$$$ may have more children in addition to $$$node_1$$$, … $$$node_j$$$. These other children must be created manually. For each of these children $$$c$$$, you must decide how many sproutings it will experience. The count can be anything between $$$0$$$ and $$$j$$$. If you want subtree $$$c$$$ to experience $$$x$$$ sproutings, you would simply construct it after calling the $$$(j-x)$$$th sprouting. So, the minimum cost to construct a manually created child $$$c$$$ is $$$\min{(dp_{c,x})}$$$ over all $$$0 \leq x \leq j$$$.
Now the problem is: given the dp values of all children, assign sprouting counts between $$$0$$$ and $$$j$$$ to each of them such that each count from $$$0$$$ to $$$j-1$$$ is assigned to at least one child. We want to do this such that the sum of $$$dp_{c,\text{assigned}_c}$$$ over all children $$$c$$$ is minimized.
We will solve this subproblem with a bitmask DP. Let $$$pd_{i,m}$$$ be the minimum cost to assign the first $$$i$$$ children when the sprouting counts contained in the bits of $$$m$$$ have been used. We then have $$$pd_{i,m} = \min{(pd_{i-1,m} + dp_{c,b}, pd_{i-1,m-2^b} + dp_{c,b})}$$$ over all bits $$$b$$$ in $$$m$$$, where $$$c$$$ is the $$$i$$$th child. Now, $$$dp_{node,j} = \min{(pd_{c, 2^j-1}, pd_{c, 2^{j+1}-1})} + c-j$$$ where $$$c$$$ is the number of children. There are two cases because having a child with sprouting count $$$j$$$ is optional, and the $$$c-j$$$ term is because out of the $$$c$$$ children, $$$j$$$ were created from sproutings, so $$$c-j$$$ must be created manually.
For a type-$$$k$$$ tree we can find $$$dp_{i,j}$$$ for all $$$j \leq k$$$ in $$$O(2^k)$$$ time. Since $$$k$$$ can go up to $$$\log{n}$$$, this seems like $$$O(n)$$$ time to compute each subtree, so $$$O(n^2)$$$ in total. However, we can bound the number of type-$$$k$$$ trees by $$$O(\frac{n}{2^k})$$$ (proof in the next paragraph). So the time complexity to compute all type-$$$k$$$ subtrees is $$$O(\frac{n}{2^k} \cdot 2^k) = O(n)$$$. $$$k$$$ can go up to $$$\log{n}$$$, so the total time complexity is $$$O(n \log{n})$$$. The answer is $$$\min{(dp_{1,j} + j)}$$$ over all $$$j$$$.
Proof of the bound on the number of type-$$$k$$$ subtrees: a subtree of type $$$k$$$ needs a child of type at least zero, a (different) child of type at least one, a (different) child of type at least two, ..., and a (different) child of type at least $$$k-1$$$. Call these the subtree's requirements. Suppose there are $$$m$$$ type-$$$k$$$ subtrees. Each type-$$$k$$$ subtree can "go toward" filling out the requirement of at most one other type-$$$k$$$ subtree (its closest type-$$$k$$$ ancestor). In the best case, this crosses out $$$m$$$ different type-$$$(k-1)$$$ child requirements. However, the remaining requirements must be filled by disjoint subtrees, which requires adding at least $$$m \cdot 2^{k-1}$$$ more nodes. Therefore $$$n$$$ is $$$\Omega(m \cdot 2^k)$$$, so $$$m$$$ is $$$O(\frac{n}{2^k})$$$.
#include<bits/stdc++.h>
using namespace std;
int main(){
cin.tie(0)->sync_with_stdio(0);
int test; cin >> test;
while(test--){
int n; cin >> n;
vector<int> par(n);
vector<vector<int>> childs(n);
for(int i=1; i<=n-1; i++){
cin >> par[i]; par[i]--;
childs[par[i]].push_back(i);
}
int log = 1;
while((1<<log) < n) log++;
vector<vector<int>> dp(n);
vector<int> type(n);
function<void(int)> dfs = [&](int node){
if(childs[node].empty()){
type[node] = 0;
dp[node] = {0};
return;
}
// calculate type of node
vector<int> counts(log);
for(int c: childs[node]){
dfs(c);
counts[type[c]]++;
}
int curr = type[node] = (int)childs[node].size();
for(int i=0; i<log; i++){
type[node] = min(type[node], curr+i);
curr -= counts[i];
}
// compute pd
vector<int> pd(1<<(type[node]+1), n);
pd[0] = 0;
for(int c: childs[node]){
vector<int> newpd(1<<(type[node]+1), n);
for(int mask=0; mask<(1<<(type[node]+1)); mask++){
for(int j=0; j<=min(type[node], type[c]); j++){
if((mask>>j)&1){
newpd[mask] = min(newpd[mask], pd[mask] + dp[c][j]);
newpd[mask] = min(newpd[mask], pd[mask-(1<<j)] + dp[c][j]);
}
}
}
pd = newpd;
}
// compute dp
for(int j=0; j<=type[node]; j++){
dp[node].push_back(min(pd[(1<<j)-1], pd[(1<<(j+1))-1]) + (int)childs[node].size() - j);
}
};
dfs(0);
int ans = n;
for(int j=0; j<=type[0]; j++) ans = min(ans, dp[0][j] + j);
cout << ans << "\n";
}
return 0;
}
The tree you get after applying $$$k$$$ sproutings represents the von Neumann construction of the natural numbers. This also shows that if you have a type-$$$x$$$ tree and replace each node with a type-$$$y$$$ tree, you get a type-$$$(x+y)$$$ tree.
Idea: DanTheMan.
Preparation: DanTheMan.
Hapi’s River
It is never optimal to have an edge between two nodes of the same river.
Adding the edge splits the graph into two independent graphs where each is composed of two paths. How can we solve for the equilibrium cost of one part?
Let $$$\text{best}(i)$$$ be the best city on the Blue Nile to put an edge starting from city $$$i$$$ on the White Nile.
$$$\text{best}(i)$$$ is monotonic on $$$i$$$.
First, the added edge may as well be undirected, because it is impossible to have the edge be traversed both ways. Second, the added edge should always go between cities on different rivers, since connecting two cities on the same river shortens the distance between Khartoum and Juba in all cases.
Adding an edge splits the graph into a left section and a right section where each section has two parallel paths. Each section is independent because travelers can take either parallel path for the left section, and then optionally take the edge to go on either parallel path for the right section. The total equilibrium distance is the sum of the equilibrium distances of each section. Call the equilibrium cost of having the edge go between city $$$i$$$ on the White Nile and city $$$j$$$ on the Blue Nile $$$\text{cost}(i,j)$$$.
Given a section with two parallel paths, at equilibrium the cost of traveling on either path should be the same, or all travelers should go on one path. If the two path costs were not equal, some traffic would move from the more expensive path to the cheaper one, so it would not be at equilibrium. So to find the equilibrium cost, we look at the total linear functions given by the sums of the edges on each of the paths, set them equal, and solve. The only edge case is when all of the flow is on one side, which happens if the constant term of one path is greater than the sum of the constant and linear coefficients of the other.
So, to find the equilibrium cost for a given split, use prefix sums on the linear and constant coefficients to get the total linear functions for each of the four parts. Then, we can find the cost on each side with a formula and return the sum. This gives us an $$$O(1)$$$ way to find the equilibrium cost of the entire graph, given an edge.
Let’s fix the location of the split on the top row and solve for the best location of the split on the bottom row. Let $$$\text{best}(i)$$$ be the $$$j$$$ that maximizes the cost of making an edge between $$$i$$$ and $$$j$$$. Theorem: $$$\text{best}(i)$$$ is monotonically non-decreasing, meaning that $$$\text{best}(i) \leq \text{best}(j)$$$ for all $$$i \leq j$$$. The proof for this is at the end of this editorial.
Now, we want to solve for $$$\text{best}(i)$$$ for all $$$i$$$ quickly. Then, our answer will just be the maximum cost of making the edge $$$(i, \text{best}(i))$$$ over all $$$i$$$. We will use divide and conquer. Suppose we have a section of the White Nile $$$[al,ar]$$$ and a section of the Blue Nile $$$[bl, br]$$$ and for all $$$i$$$ in $$$[al,ar]$$$, we know that $$$\text{best}(i)$$$ is in $$$[bl,br]$$$. We will devise an algorithm $$$\text{solve}(al, ar, bl, br)$$$ that finds $$$\text{best}(i)$$$ for all $$$i$$$ in $$$[al,ar]$$$ in this case.
Our base case is if $$$bl=br$$$, then we know that for all $$$i$$$ in $$$[al,ar]$$$, $$$\text{best}(i) = bl$$$. Otherwise, let $$$am = \left\lfloor (al+ar)/2 \right\rfloor$$$. First, find $$$\text{best}(am)$$$ with brute force in $$$O(br-bl)$$$ time by finding the maximum cost of the edge $$$(am,j)$$$ over all $$$j$$$ in $$$[bl, br]$$$. After finding $$$\text{best}(am)$$$, our monotonicity property means that $$$\text{best}(i)$$$ is in $$$[bl, \text{best}(am)]$$$ for all $$$i$$$ in $$$[al, am-1]$$$ and $$$\text{best}(i)$$$ is in $$$[\text{best}(am), br]$$$ for all $$$i$$$ in $$$[am+1, ar]$$$. These are conditions that exactly match the form of our solve function. Our final step is to recursively call $$$\text{solve}(al, am-1, bl, \text{best}(am))$$$ and $$$\text{solve}(am+1, ar, \text{best}(am), br)$$$.
What is the time complexity of $$$\text{solve}(1, n, 1, m)$$$? Look at the recursion tree. The depth of the tree is $$$O(\log{n})$$$ since between each row of the tree, $$$ar-al$$$ at each node halves. Additionally, each row of the tree takes $$$O(n)$$$ time to find each $$$\text{best}(am)$$$ since all of the $$$[bl, br]$$$ are disjoint so the brute force goes through each $$$j$$$ at most once. This means that the total algorithm runs in $$$O(n \cdot \log{n})$$$ time.
Proof of monotonicity of $$$\text{best}(i)$$$.
We just have to show the Monge inequality on the cost function: $$$\text{cost}(x_1, y_2) + \text{cost}(x_2,y_1) \leq \text{cost}(x_1, y_1) + \text{cost}(x_2, y_2)$$$ for all $$$x_1 \leq x_2$$$ and $$$y_1 \leq y_2$$$ . Split each edge with cost $$$ax + b$$$ into $$$a$$$ edges of cost $$$x$$$ and $$$b$$$ edges of cost $$$1$$$ in series to form an augmented graph. Now, if the Monge inequality is true for all edges on the augmented graph, then it is also true for all edges in the original graph, because the edges in the original graph are a subset of the edges on the augmented graph, so the inequalities are also a subset. To show the Monge inequality on the augmented graph, we just need to show the 2 by 2 Monge inequality: $$$\text{cost}(x,y+1) + \text{cost}(x+1,y) \leq \text{cost}(x,y) + \text{cost}(x+1,y+1)$$$.
To show this, we can break this into four different cases depending on whether the edges $$$(x,x+1)$$$ and $$$(y,y+1)$$$ are each of cost $$$1$$$ or of cost $$$x$$$. In each case, we can do a bit of algebra using the formula for $$$\text{cost}$$$ to show that the inequality holds. The hardest case is when one of the edges is of cost $$$1$$$ and the other is of cost $$$x$$$. Details left to the reader XD.
#include<bits/stdc++.h>
using namespace std;
#define F first
#define S second
typedef long long ll;
typedef long double ld;
ld single(pair<ld, ld> x, pair<ld, ld> y){
ld a = x.F, b = x.S, c = y.F, d = y.S;
if(a+b <= d) return a+b;
if(c+d <= b) return c+d;
return a*(c+d-b)/(a+c)+b;
};
int main(){
cin.tie(0)->sync_with_stdio(0);
int test; cin >> test;
for(int tc=0; tc<test; tc++){
// read input
int n,m; cin >> n >> m;
vector<pair<ll , ll>> a(n), b(m);
for(int i=0; i<n; i++) cin >> a[i].F;
for(int i=0; i<n; i++) cin >> a[i].S;
for(int i=0; i<m; i++) cin >> b[i].F;
for(int i=0; i<m; i++) cin >> b[i].S;
// build prefix sums on each row
vector<pair<ll, ll>> prefa = a, prefb = b;
for(int i=1; i<n; i++){
prefa[i].F += prefa[i-1].F;
prefa[i].S += prefa[i-1].S;
}
for(int i=1; i<m; i++){
prefb[i].F += prefb[i-1].F;
prefb[i].S += prefb[i-1].S;
}
// shortest path if break is at l1 and l2
function<ld(int,int)>eval = [&](int l1, int l2){
ld left = single(prefa[l1], prefb[l2]);
ld right = single(
make_pair(prefa[n-1].F - prefa[l1].F, prefa[n-1].S - prefa[l1].S),
make_pair(prefb[m-1].F - prefb[l2].F, prefb[m-1].S - prefb[l2].S));
return left + right;
};
ld ans = 0;
function<void(int,int,int,int)>consider = [&](int al, int ar, int bl, int br){
// considers the best matchings for [al, ar] when you know only [bl, br] is in contention
if(bl == br){
for(int i=al; i<=ar; i++) ans = max(ans, eval(i, bl));
return;
}
int am = (al+ar)/2;
pair<ld, int> best = make_pair(eval(am, bl), bl);
for(int i=bl; i<=br; i++){
best = max(best, make_pair(eval(am, i), i));
}
ans = max(ans, best.F);
if(al < am) consider(al, am-1, bl, best.S);
if(ar > am) consider(am+1, ar, best.S, br);
};
consider(0, n-1, 0, m-1);
cout << setprecision(12) << ans << "\n";
}
return 0;
}
This problem was inspired from Braess' Paradox which actually has cool applications. Tell your local representative that adding roads willy-nilly will not magically decrease traffic!
Idea: DanTheMan.
Preparation: DanTheMan.
Yatagarasu's Messages
What if $$$a_i \gt b_i$$$ for all $$$i$$$? What if $$$a_i \lt b_i$$$ for all $$$i$$$?
Classify messages into two types. Type one is where $$$a_i \leq b_i$$$ and type two is where $$$a_i \gt b_i$$$. Type one messages should go before all type two messages. Within type one messages, sort in increasing order of $$$a_i$$$ and within type two messages, sort in decreasing order of $$$b_i$$$.
How can we condense information about subranges of messages? How do these ranges merge?
Let $$$t$$$ at any given point be the amount of time required to deliver all inscribed-but-undelivered messages. Look at the effect of a message on $$$t$$$ as a function $$$t_{out}(t_{in})$$$.
The function always takes the form $$$t_{out}(t_{in}) = \max(a, t_{in} + b)$$$ for constants $$$a$$$ and $$$b$$$. How do these functions compose? What does this tell you about subranges of consecutive messages?
What data structure can process moving elements around and merging subarrays?
Obviously, once a message is finished being inscribed, we should immediately start inscribing the next message. As messages finish being inscribed, we create a queue of messages to be delivered. Whenever there is a message in the queue, we should start delivering it. All that matters about the queue is the total amount of time to finish delivering all messages inside (call this value $$$t$$$). Thus, our only decision is the order in which to inscribe messages. The best way to order the messages with proof is covered at the end of this editorial.
Observe how $$$t$$$ changes from the moment before we start inscribing a message to the moment after we finish inscribing it. For each message, we can write $$$t_{out}$$$ as a function of $$$t_{in}$$$: $$$t_{out} = \max(t_{in}-a_i, 0) + b_i$$$. We will call this function the unit function of message $$$i$$$. Functions of this form are (translated) ReLUs.
Next, observe how $$$t$$$ changes between two consecutive messages being inscribed. $$$t_{out}$$$ as a function of $$$t_{in}$$$ becomes the composition of the unit functions of the two messages. Well, the composition of two ReLUs is a ReLU! This makes it very easy to merge things, because any contiguous range of messages has a specific ReLU function which represents how processing that range of messages affects $$$t$$$. The total amount of time to inscribe and deliver all messages is the sum of all $$$a_i$$$ plus the value of the ReLU function for the whole array with $$$0$$$ plugged in.
Now, how can we process updates? We use a treap. The binary search key will be the $$$(a_i, b_i)$$$ pair, which we order using the ordering rule described below. The heap key is randomized to make operations fast. Then, for each node, we also maintain the ReLU function of the segment of messages corresponding to the node’s subtree. To compute the ReLU of a subtree $$$i$$$, we compose the ReLU of the left child with the unit ReLU of $$$i$$$, and then compose the result with the ReLU of the right child.
Ordering procedure and proof: We should order the messages according to Johnson's rule. Classify all messages into those with $$$a_i \leq b_i$$$ and those with $$$a_i \gt b_i$$$. Put all messages of the first type before all messages of the second type. Within messages of the first type, sort by $$$a$$$ in increasing order. Within messages of the second type, sort by $$$b$$$ in decreasing order.
We can prove Johnson's Rule with an exchange argument. Suppose there is a pair of adjacent messages that does not satisfy Johnson's Rule. We will show that if we swap that pair, then the overall time does not increase. Look at the ReLu of the two messages in each case. If you put message $$$i$$$ in front of message $$$j$$$ then the ReLu has vertex $$$(a_i + \max{(b_i - a_j, 0)}, b_j + \max{(a_j - b_i, 0)})$$$. If you put message $$$j$$$ in front of message $$$i$$$ then the ReLu has vertex $$$(a_j + \max{(b_j - a_i, 0)}, b_i + \max{(a_i - b_j, 0)})$$$.
A ReLU with vertex $$$(f,g)$$$ is no bigger over all inputs than a ReLU with vertex $$$(h,i)$$$ if $$$g \leq i$$$ and $$$g-f \leq i-h$$$. We need to show that the case where Johnson's rule is satisfied has both of these inequalities satisfied. This can be done by casework and doing out the algebra.
Time Complexity: $$$O(n \log{n})$$$
#include<bits/stdc++.h>
using namespace std;
#define mp make_pair
#define F first
#define S second
typedef long long ll;
typedef pair<int,int> pi;
typedef pair<ll,ll> pl;
mt19937 rng(std::chrono::steady_clock::now().time_since_epoch().count());
struct treap{
// supports insertion and deletion in O(log n)
bool compare_times(pi x, pi y){ // returns true if x comes before y
if((x.F <= x.S) != (y.F <= y.S)) return x.F <= x.S;
if(x.F <= x.S) return x.F < y.F;
return x.S > y.S;
}
struct Node{
long double priority_key; // heap key
pi times; // (inscribe time, deliver time); acts as BST key under the greedy ordering
pair<ll,ll> relu; // out_c = max(relu.S, relu.S + c_in - relu.F)
Node *left, *right;
Node(pi times_) :
priority_key(uniform_real_distribution<long double>(0.0, 1.0)(rng)),
times(times_),
relu(times_),
left(nullptr),
right(nullptr)
{}
pair<ll,ll> compose_relu(pair<ll,ll> r1, pair<ll,ll> r2){
if(r1.S < r2.F) return mp(r1.F+r2.F-r1.S, r2.S);
return mp(r1.F, r2.S+r1.S-r2.F);
}
void update_relu(){
if(left == nullptr) relu = times;
else relu = compose_relu(left->relu, times);
if(right != nullptr) relu = compose_relu(relu, right->relu);
}
};
Node* root = nullptr;
// divide subtree based on whether keys come before val
pair<Node*, Node*> split(Node* node, pi val){
if(node == nullptr) return mp(nullptr, nullptr);
if(compare_times(node->times, val)){ // split right tree
pair<Node*, Node*> x = split(node->right, val);
node->right = x.F;
node->update_relu();
return mp(node, x.S);
}
else{ // split left subtree
pair<Node*, Node*> x = split(node->left, val);
node->left = x.S;
node->update_relu();
return mp(x.F, node);
}
}
// returns root of joined, node1 has all keys less than node2
Node* join(Node* node1, Node* node2){
if(node2 == nullptr) return node1;
if(node1 == nullptr) return node2;
if(node1->priority_key > node2->priority_key){ // merge node2 into node1's right
node1->right = join(node1->right, node2);
node1->update_relu();
return node1;
}
else{ // merge node1 into node2's left
node2->left = join(node1, node2->left);
node2->update_relu();
return node2;
}
}
// returns root of inserted
Node* insert(Node* node, Node* toinsert){
if(node == nullptr) return toinsert;
if(toinsert->priority_key > node->priority_key){ // put toinsert here
pair<Node*, Node*> splits = split(node, toinsert->times);
toinsert->left = splits.F;
toinsert->right = splits.S;
toinsert->update_relu();
return toinsert;
}
else{
if(compare_times(toinsert->times, node->times)) node->left = insert(node->left, toinsert);
else node->right = insert(node->right, toinsert);
node->update_relu();
return node;
}
}
void insert(pi times_){
Node* toinsert = new Node(times_);
root = insert(root, toinsert);
}
Node* find(Node* node, pi times){
if(node == nullptr) return nullptr;
if(times == node->times) return node;
if(compare_times(times, node->times)) return find(node->left, times);
return find(node->right, times);
}
Node* erase(Node* node, Node*todelete){
if(node == nullptr) return nullptr;
if(node == todelete){
Node* res = join(node->left, node->right);
delete node;
return res;
}
if(compare_times(todelete->times, node->times)) node->left = erase(node->left, todelete);
else node->right = erase(node->right, todelete);
node->update_relu();
return node;
}
void erase(pi deletetimes){
Node* todelete = find(root, deletetimes);
if(todelete) root = erase(root, todelete);
}
ll query(pi oldtimes, pi newtimes){
erase(oldtimes);
insert(newtimes);
return (root->relu).S;
}
};
int main(){
cin.tie(0)->sync_with_stdio(0);
int test; cin >> test;
while(test--){
int n; cin >> n;
vector<pair<ll, ll>> a(n);
ll suma = 0;
treap T;
for(int i=0; i<n; i++){
cin >> a[i].F >> a[i].S;
T.insert(a[i]);
suma += a[i].F;
}
int q; cin >> q;
for(int qu=0; qu<q; qu++){
int i; cin >> i; i--;
pi p; cin >> p.F >> p.S;
suma += p.F - a[i].F;
cout << T.query(a[i], p) + suma << "\n";
a[i] = p;
}
}
return 0;
}
Idea: DanTheMan.
Preparation: DanTheMan.
Odin’s Defense
Create the entire triangular lattice within the shape.
What can you say about the orientation of all triangles on the border of a pointy polygon?
What happens when you remove all edges around a unit triangle?
Minimizing added perimeter is the same as maximizing the amount of deleted perimeter.
Make a graph where each triangle is a node and there is an edge between adjacent unit triangles.
The graph is bipartite. We want to find maximum independent set.
Call the required condition being pointy.
Start with the entire triangular lattice contained in the given region. This is a pointy partition since all unit triangles are pointy. Call the unit triangles that do not touch any edges of the region inner triangles. Consider what happens when you choose an inner triangle and delete the three segments around it. What’s left is still pointy! Now do it again, and again (we must choose inner triangles whose three segments are all still present). Still pointy? How interesting!
The core of this problem is the following theorem: There is a bijection between (pointy partitions) and (sets of inner triangles such that no two inner triangles in the set share an edge). The proof of the theorem is at the end of this editorial.
Since we are minimizing the perimeter, we should maximize the number of segments we remove. Based on our theorem, we can now look for the maximum number of edge-independent inner triangles. Also, note that the triangular lattice is bipartite, because the orientation flips between adjacent triangles. So, the problem is now just maximum independent set on a bipartite graph! We can use Kuhn’s algorithm, or convert to flows and use Dinic’s algorithm, to solve in $$$O(n^2)$$$ or $$$O(n \sqrt{n})$$$. The answer is then $$$\frac{3 \cdot \text{total number of triangles} + n}{2} - 3 \cdot |\text{MIS}|$$$.
But now, given the array $$$a$$$, how do we construct the graph? First, we get coordinates for the lattice vertices on the borders of the shape. Then, we get all triangles within the shape. For each row, we look at where the boundary crosses diagonally, and then add the triangles between each (\ … /) pair.
Then we get all triangles on the boundary. We determine whether the boundary is given clockwise or counter-clockwise based on the total angle traveled. If it’s CCW, we reverse the order. Then, we know that our right-hand is in the shape as we travel along the boundary. For each boundary segment, the triangles that our right hand is in are the boundary triangles.
The inner triangles are just the difference between the set of triangles in the shape and the triangles on the boundaries. To construct the edges of the graph, we look at each inner triangle, and check whether each of the three adjacent triangles are also inner triangles. This involves some casework on the orientation of the triangle.
For implementation, it helps a lot to have a good coordinate system. In my coordinate system, the y-coordinates were geometrically more spaced out than the x-coordinates.
Proof of the theorem:
First, notice that a polygon is pointy if and only if all of its boundary triangles are the same orientation (pointing up/down). This is because touching triangles of the same orientation make 60 or 180 or 300 degree angles with each other and touching triangles of different orientations make 120 or 240 degree angles with each other.
Now, let's prove that a segment-independent set of inner triangles, when removed, always results in a pointy polygon. Create an auxiliary graph: for each vertex (triangle) in the removed set, draw an edge between them if they are a distance of two away in the graph. Notice that the polygons formed by the set correspond to the components of the auxiliary graph. This is because, since it is an independent set, there are no vertices a distance one apart, and if a subset of triangles are a distance of three or more from all other triangles, then there will always be stray segments in between.
These connected components mean that all vertices in a connected component are of the same orientation. This is because of the parity of the distances and the bipartiteness of the lattice. This also means the border triangles are all the same orientation (opposite to the triangles in the connected component) because each border triangles is adjacent to a triangle in the connected component. Thus, the polygons are all pointy.
Next, let's prove that every pointy polygon has a segment-independent set of inner triangles that can be removed to create the partition. We can do this constructively by selecting all inner triangles that have orientation opposite to the border triangles of the polygon. This keeps the hull of the polygon the same. Also, it removes all segments inside, because each segment inside the polygon is adjacent to exactly one of the selected triangles.
This shows the bijection between pointy partitions and independent sets of inner triangles in the triangle graph.
#include<bits/stdc++.h>
using namespace std;
#define forn(i,n) for(int i=0;(i)<(n);i++)
#define sz(x) (int)x.size()
#define mp make_pair
#define F first
#define S second
#define pb push_back
typedef vector<int> vi;
typedef pair<int,int> pi;
int MIS(vector<vi> adj){
// assumes graph is bipartite
// uses Kuhn's algorithm to find a maximum bipartite matching, then takes the complement
int n = sz(adj);
// finds groupings
vi dist(n,-1);
queue<int> bfs;
forn(i,n){
if(dist[i]!=-1) continue;
dist[i] = 0;
bfs.push(i);
while(!bfs.empty()){
int node = bfs.front(); bfs.pop();
for(int c: adj[node]){
if(dist[c]==-1){
dist[c] = (1+dist[node])%2;
bfs.push(c);
}
}
}
}
vi match(n,-1);
vi augment;
vector<bool> vis(n);
//returns whether you can find an unsaturated node
function<bool(int)>dfs = [&](int node){
if(vis[node]) return false;
vis[node] = true;
if(dist[node]==1){
if(match[node]==-1 || dfs(match[node])){
augment.pb(node);
return true;
}
}
else{
for(int c: adj[node]){
if(dfs(c)){
augment.pb(node);
return true;
}
}
}
return false;
};
int ans = 0;
forn(i,n){
if(dist[i]==1 || match[i]!=-1) continue;
vis = vector<bool>(n);
augment.clear();
if(dfs(i)){
ans++;
for(int x = 0; x<sz(augment); x+=2){
match[augment[x]] = augment[x+1];
match[augment[x+1]] = augment[x];
}
}
}
return n - ans;
}
int main(){
cin.tie(0) -> sync_with_stdio(0);
int test; cin >> test;
forn(tc,test){
int n; cin >> n;
vi a(n);
forn(i,n){
cin >> a[i]; // turn angles, multiples of 60 degrees
a[i] /= 60; // now each a[i] is in [-2, 2]
}
pi coord = mp(0,0);
int direction = 0; // current heading, taken mod 6 to index the six lattice directions
vector<pi> vertices;
// find vertices and orientation
forn(i,n){
direction += a[i];
int d = ((direction%6)+6)%6;
if(d == 0) coord.F += 2;
if(d == 1){ coord.F += 1; coord.S -= 1; }
if(d == 2){ coord.F -= 1; coord.S -= 1; }
if(d == 3) coord.F -= 2;
if(d == 4){ coord.F -= 1; coord.S += 1; }
if(d == 5){ coord.F += 1; coord.S += 1; }
vertices.pb(coord);
}
// wlog vertices in clockwise order, right hand is in shape
if(direction == -6){
forn(i,n/2) swap(vertices[i], vertices[n-1-i]);
}
// get boundary
vector<pi> boundary;
map<pi,bool> inboundary;
unordered_map<int,vi> occ;
// occ[y] is list of all x-coordinates that mark insides of that y-value
forn(i,n){
pi x = vertices[i], y = vertices[(i+1)%n];
if(y.F==x.F+2 && y.S==x.S){
if(!inboundary[mp(x.F+1, x.S-1)]){
boundary.pb(mp(x.F+1, x.S-1));
}
inboundary[mp(x.F+1, x.S-1)] = true;
}
if(y.F==x.F+1 && y.S==x.S-1){
if(!inboundary[(mp(x.F, x.S-1))]){
boundary.pb(mp(x.F, x.S-1));
}
inboundary[(mp(x.F, x.S-1))] = true;
occ[x.S-1].pb(x.F);
}
if(y.F==x.F-1 && y.S==x.S-1){
if(!inboundary[(mp(x.F-1, x.S-1))]){
boundary.pb(mp(x.F-1, x.S-1));
}
inboundary[(mp(x.F-1, x.S-1))] = true;
occ[x.S-1].pb(x.F-1);
}
if(y.F==x.F-2 && y.S==x.S){
if(!inboundary[(mp(x.F-1, x.S))]){
boundary.pb(mp(x.F-1, x.S));
}
inboundary[(mp(x.F-1, x.S))] = true;
}
if(y.F==x.F-1 && y.S==x.S+1){
if(!inboundary[(mp(x.F, x.S))]){
boundary.pb(mp(x.F, x.S));
}
inboundary[(mp(x.F, x.S))] = true;
occ[x.S].pb(x.F);
}
if(y.F==x.F+1 && y.S==x.S+1){
if(!inboundary[(mp(x.F+1, x.S))]){
boundary.pb(mp(x.F+1, x.S));
}
inboundary[(mp(x.F+1, x.S))] = true;
occ[x.S].pb(x.F+1);
}
}
vector<pi> interior;
map<pi, int> index;
for(pair<int,vi> p: occ){
sort(p.S.begin(), p.S.end());
for(int i = 0; i<sz(p.S); i+=2){
for(int j=p.S[i]; j<=p.S[i+1]; j++){
interior.pb(mp(j,p.F));
}
}
}
// nodes in triangles - boundary are graph
vector<pi> graph;
for(pi p: interior){
if(!inboundary[p]){
index[p] = sz(graph);
graph.pb(p);
}
}
// construct graph
int N = sz(graph);
vector<vi> adj(N);
forn(i,N){
int x = graph[i].F, y = graph[i].S;
if(!inboundary[mp(x-1,y)]){
adj[i].pb(index[mp(x-1,y)]);
}
if(!inboundary[mp(x+1,y)]){
adj[i].pb(index[mp(x+1,y)]);
}
if(((x+y)%2+2)%2 == 0){ // pointed down
if(!inboundary[mp(x, y+1)]){
adj[i].pb(index[mp(x,y+1)]);
}
}
else{ // pointed up
if(!inboundary[mp(x, y-1)]){
adj[i].pb(index[mp(x,y-1)]);
}
}
}
int mis = MIS(adj);
cout << (3*sz(interior)+n)/2 - 3*mis - n << "\n";
}
return 0;
}
Idea: DanTheMan.
Preparation: DanTheMan.
Fun Round
Who's My Mom?
All of the input possibilities are names of Greek Gods (specifically Olympians).
Google the name of each Olympian.
We should quickly realize that the input contains the name of a Greek God. The problem name tells us that we should print the name of the God's mother, according to mythology. We can google this, and print the answer for each case.
s = input()
if s == "Hephaestus" or s == "Ares":
    print("Hera")
elif s == "Apollo" or s == "Artemis":
    print("Leto")
else:
    print("Rhea")
#include <bits/stdc++.h>
using namespace std;
int main() {
    string s; cin >> s;
    if(s == "Hephaestus" || s == "Ares") cout << "Hera";
    else if(s == "Apollo" || s == "Artemis") cout << "Leto";
    else cout << "Rhea";
    return 0;
}
Idea: DanTheMan.
Preparation: DanTheMan.
Slide, Slide, Slippery Slide
The output suggests that this is some kind of hand game.
If you don't know the game, you can search up "Slide, Slide, Slippery Slide clap missing 5". In general, just try cramming every clue in the problem into one google query.
This problem is a reference to the children's hand game "Slide, Slide, Slippery Slide" (if you say the number 5, you will be disqualified). In the game, two players go back and forth counting upward, but they must skip all numbers that have a 5 in their decimal representation. The task is to implement this counting.
If you don't know the game, you can just search up "Slide, Slide, Slippery Slide clap missing 5" and you will get this TikTok, which shows you how it works. Apparently other games are also known by the same name, so sorry if you were misled by other Google results.
n = int(input())
for i in range(1, n+1):
    if '5' not in str(i):
        print("Clap", i)
#include <bits/stdc++.h>
using namespace std;
int main() {
    int n; cin >> n;
    for(int i=1; i<=n; i++){
        if(to_string(i).find('5') == string::npos) cout << "Clap " << i << "\n";
    }
    return 0;
}
Idea: DanTheMan.
Preparation: DanTheMan.
Aak
Note everything that the limerick references: turtle, python, counting, program, pen.
The problem is loosely about the Python turtle module.
Aak is the Mayan word for turtle.
The problem is about Mayan numerals.
The references to turtle, python, program, and pen, along with the example inputs, suggest that the problem is about the Python turtle. But what is it drawing? To get more clues, search up "Aak turtle" to realize that the problem has something to do with the Maya. Then, the counting and the example outputs suggest that the problem is about numbers. Search up Mayan numerals to find pictures of the different symbols.
The input is a set of instructions for a Python turtle that draw a Mayan numeral. You have to decipher the drawing and turn it back into a number. Note that not all forward commands have the same length.
def read_cell():
    res = 0
    rows = int(input())
    for row in range(rows):
        inp = input().split()
        for i in range(len(inp)):
            if inp[i] == "down":
                if inp[i+1] == "up":
                    res += 1  # pen lifted immediately: a dot
                else:
                    res += 5  # pen stays down: a bar
    return res

ans = 0
cells = int(input())
for cell in range(cells):
    ans *= 20
    ans += read_cell()
print(ans)
#include <bits/stdc++.h>
using namespace std;
int read_cell() {
    int res = 0;
    int rows; cin >> rows;
    cin.ignore();
    for (int r = 0; r < rows; ++r) {
        string line;
        getline(cin, line);
        stringstream ss(line);
        vector<string> inp;
        string word;
        while (ss >> word) inp.push_back(word);
        for (int i = 0; i < (int)inp.size(); ++i) {
            if (inp[i] == "down") {
                if (inp[i + 1] == "up") res += 1;
                else res += 5;
            }
        }
    }
    return res;
}
int main() {
    int ans = 0;
    int cells; cin >> cells;
    for (int cell = 0; cell < cells; ++cell) {
        ans *= 20;
        ans += read_cell();
    }
    cout << ans << endl;
    return 0;
}
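Both solutions finish by accumulating the cells in base 20, since Mayan numerals are vigesimal and the cells are read most significant first. As a tiny standalone sketch of just that accumulation (the function name and cell values here are made up for illustration):

```python
def cells_to_number(cells):
    # Fold the cell values top to bottom in base 20,
    # exactly like the final loop in the solutions above.
    ans = 0
    for c in cells:
        ans = ans * 20 + c
    return ans

print(cells_to_number([2, 11]))  # 2*20 + 11 -> 51
```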
Idea: DanTheMan.
Preparation: DanTheMan.

Thanks for epic problems!
Kane's Tree was a truly beautiful problem :)
Erm akshually its Kāne's Tree. Still a genuinely delightful problem :)
Did you know that the tree you get after applying k sproutings represents the von Neumann construction of the naturals? This also shows that if you have a type-x tree and replace each node with a type-y tree, you get a type-(x+y) tree, which is pretty cool
My favorite as well... :DDD
Great problems! dantheman orz