Reminder: We welcome you to participate in the official DIV1/DIV2 round scheduled on the 31st!
Idea: frostcat
We can see that if $$$x$$$ is a perfect square, then $$$x^x$$$, $$$x^{(x^x)}$$$, $$$\ldots$$$ are also perfect squares.
Proof: assume $$$x = y^2$$$. Then for any positive integer $$$z$$$, $$$x^z = (y^z) \cdot (y^z)$$$, which is a perfect square.
So the answer is the number of perfect squares not greater than $$$n$$$.
There are exactly $$$\lfloor \sqrt{n} \rfloor$$$ integers $$$x$$$ (with $$$1 \leq x \leq n$$$) such that $$$x$$$ is a mystic number.
#include <iostream>
#include <cmath>
using namespace std;
int main() {
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
long long s = static_cast<long long>(sqrt(static_cast<long double>(n)));
while ((s + 1) * (s + 1) <= n) ++s; // guard against floating-point rounding
while (s * s > n) --s;
cout << s << '\n';
}
return 0;
}
Idea: frostcat
If $$$n \leq 3$$$, it's impossible to satisfy the conditions, so the answer is $$$-1$$$.
For $$$n \geq 4$$$, you can construct a valid permutation like $$$[n, n-1, \ldots, 5, 1, 3, 2, 4]$$$.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin>>t;
while(t--)
{
int n;
cin>>n;
if(n <= 3)
{
cout<<"-1\n";
continue;
}
while(n >= 5)
cout << n-- << ' ';
cout<<"1 3 2 4\n";
}
}
Idea: Snowdust
Let's break the possible values of $$$m$$$ (the count of colorful subarrays) into ranges:
Case 1: $$$m = n$$$
- We only need the $$$n$$$ single-element subarrays, each trivially distinct.
- Output any array with all equal elements, like $$$[1, 1, 1, \ldots, 1]$$$. This ensures no subarray of length $$$> 1$$$ has all-distinct elements.
- Total uniqueness value: $$$n$$$. (This is also the minimum $$$m$$$ we can achieve.)
Case 2: $$$n < m < 2n$$$
- We already get $$$n$$$ uniqueness from the 1-element subarrays.
- We now need $$$m - n$$$ more distinct subarrays of length 2.
- Example: $$$[1, 2, 1, 2]$$$. The subarrays with indices $$$[0,1]$$$, $$$[1,2]$$$, $$$[2,3]$$$ each contribute 1 to the count → 3 subarrays of length 2 with distinct elements.
- In general, the maximum number of length-2 colorful subarrays in an array of size $$$n$$$ is $$$n - 1$$$.
- Hence, we can reach up to $$$m = 2n - 1$$$ using only the values $$$1$$$ and $$$2$$$, and we can alter some prefix/suffix of the array to get any $$$m$$$ in this range.
Case 3: $$$m \geq 2n$$$
- We introduce a third value $$$3$$$ to push $$$f(a)$$$ further. Similar construction: start with $$$[1, 2, 3]$$$, then alternate like $$$[1, 2, 3, 1, 2, \ldots]$$$ to get more distinct subarrays. Try analyzing how many new colorful subarrays of lengths 2 and 3 we get.
- (Try to prove this part yourself.) The maximum uniqueness we can achieve is $$$3n - 3$$$, and any value in between can be achieved. (This is the maximum $$$m$$$ we can achieve.) You can check the proof in the hard version.
So, for $$$m < n$$$ or $$$m > 3n - 3$$$ it is not possible!
We know that any array of length $$$n$$$ automatically contributes $$$n$$$ colorful subarrays (the single-element ones), so $$$m_{\min} = n$$$.
Step 1: Maximum possible $$$m$$$
We want to construct the array to maximize the number of colorful subarrays.
Consider the repeating sequence $$$1, 2, \ldots, k, 1, 2, \ldots, k, \ldots$$$, repeated until the array length is $$$n$$$.
Now, examine all subarrays of length $$$p$$$ (a sliding window):
- If $$$1 \leq p \leq k$$$: all elements in the window are distinct (no repetition within the window), and for each such $$$p$$$ there are $$$n - p + 1$$$ windows.
- If $$$p \gt k$$$: by the Pigeonhole Principle, at least one element repeats.
Thus, the total number of colorful subarrays is $$$m_{\max}(k) = \sum_{p=1}^{k} (n - p + 1)$$$.
Simplifying: $$$m_{\max}(k) = nk - \frac{k(k-1)}{2}$$$.
If $$$m \gt m_{\max}(k)$$$ for every $$$k \leq n$$$, it is impossible.
Step 2: Finding the right $$$k$$$
Define $$$\text{bound}[k] := nk - \frac{k(k-1)}{2}$$$, the maximum number of colorful subarrays achievable with $$$k$$$ distinct values.
Find the smallest $$$k$$$ such that $$$\text{bound}[k-1] \lt m \leq \text{bound}[k]$$$.
This means: $$$k-1$$$ colors cannot reach $$$m$$$, but $$$k$$$ colors can generate enough colorful subarrays.
Step 3: Constructing the array
Start with the repeating pattern $$$1, 2, \ldots, k, 1, 2, \ldots$$$ of length $$$n$$$.
This creates exactly $$$\text{bound}[k]$$$ colorful subarrays. If $$$m \lt \text{bound}[k]$$$, we modify the last few elements (by repeating some values smartly) to reduce the number of colorful subarrays until exactly $$$m$$$ remain.
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
using namespace std;
template <typename T>
using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;
template <typename T>
using oms = tree<T, null_type, less_equal<T>, rb_tree_tag, tree_order_statistics_node_update>;
#define mod 1000000007
#define mod2 998244353
#define ll long long
#define bl __int128_t
#define pb push_back
#define all(v) v.begin(),v.end()
#define bs binary_search
#define rall(v) v.rbegin(),v.rend()
#define lb lower_bound
#define ub upper_bound
#define pl pair<ll,ll>
#define f(i,n) for(ll i=0;i<n;i++)
#define inp(a,n) vector<ll> a(n); f(i,n) cin>>a[i]
#define pyes cout<<"YES\n"
#define pno cout<<"NO\n"
using Mat = array<array<ll,2>,2>;
#pragma GCC optimize("O3,unroll-loops")
#pragma GCC target("popcnt")
ll power(ll x,ll y,ll z=LLONG_MAX){
ll res=1;
while(y>0){
if(y%2) res=(res*x)%z;
y/=2;
if(y) x=(x*x)%z;
}return res%z;
}
ll gcd(ll a,ll b,ll& x,ll& y){
x=1,y=0;
ll x1=0,y1=1,a1=a,b1=b;
while(b1){
ll q=a1/b1;
tie(x,x1)=make_tuple(x1,x-q*x1);
tie(y,y1)=make_tuple(y1,y-q*y1);
tie(a1,b1)=make_tuple(b1,a1-q*b1);
}return a1;
}
int main(){
ios_base::sync_with_stdio(false); cin.tie(NULL);
ll t=1,i,j,n,m,z=1e9+7,z2=998244353,k,x,y,q;
// sieve(); // for SPF
cin>>t;
ll tt=t;
string s,s2;
while(tt--){
ll n, m, k;
cin >> n >> m >> k;
if(m < n || m > 3 * n){
cout << -1 << '\n';
continue;
}
m -= n;
vector<ll> v;
v.pb(2);
v.pb(1);
for(i = 1; i < n; i++){
if(m == 0){
v.pb(v.back());
}
else if(m == 1 || i == 1){
v.pb(v[i - 1]);
m--;
}
else{
v.pb(v[i] ^ v[i - 1]);
m -= 2;
}
}
if(m > 0){
cout << -1 << '\n';
continue;
}
f(i, n) cout << v[i + 1] << ' ';
cout << '\n';
}
return 0;
}
#include <bits/stdc++.h>
using namespace std ;
#define int long long
int a[1000100];
void solve() {
int n, m, k, mxk = 0;
cin >> n >> m >> k;
for(int i=1;i<=n;i++)
mxk+=min(i,k);
if(m < n || m > mxk){
cout << -1 << "\n";
return;
}
if (n == m) {
for (int i = 0; i < n; i++)
cout << 1 << " ";
cout << "\n";
return;
}
auto bound = [&](int p){
return n*p - p*(p-1)/2;
};
for(int i = 1; i < k; ++i){
if(bound(i) < m && bound(i+1) >= m){
int kup = i+1;
int cnt = 1;
for(int j = 0; j < kup; ++j) a[j] = j+1;
int ext = m-bound(i);
int put = kup;
for(int iter = 0; iter < ext-1; ++iter){
a[put] = 1+put%kup;
put++;
}
while(put<n){
a[put] = a[put-(kup-1)];
put++;
}
}
}
for(int i = 0; i < n; ++i){
cout << a[i] << " ";
}
cout << "\n";
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(0); cout.tie(0);
int tc = 1;
cin >> tc;
while(tc-- > 0) solve();
return 0;
}
Idea: Davy_D._Kaosar
Key Insight
For a given sum $$$s = a + b$$$:
The condition $$$\gcd(a, b) = 1$$$ simplifies to $$$\gcd(a, s) = 1$$$ because $$$b = s - a$$$
$$$b \geq 2$$$ implies $$$a \leq s - 2$$$
Thus, for each $$$s$$$, the number of valid integers $$$a$$$ in $$$[1, s-2]$$$ with $$$\gcd(a, s) = 1$$$ is given by Euler's totient function $$$\phi(s)$$$ minus $$$1$$$ (to exclude $$$a = s-1$$$ which would make $$$b=1$$$).
Algorithm Selection
Precompute Euler's Totient Function: Compute $$$\phi(s)$$$ for all $$$s \leq 10^6$$$ using a sieve.
Prefix Sum Array: Build array $$$P$$$ where $$$P[s] = P[s-1] + (\phi(s) - 1)$$$ for $$$s \geq 2$$$ ($$$P[1] = 0$$$).
Efficient Query Processing: Answer each query $$$(l, r)$$$ as $$$P[r] - P[l-1]$$$.
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
using namespace std;
template <typename T>
using os = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;
template <typename T>
using oms = tree<T, null_type, less_equal<T>, rb_tree_tag, tree_order_statistics_node_update>;
#define mod 1000000007
#define mod2 998244353
#define ll long long
#define bl __int128_t
#define pb push_back
#define all(v) v.begin(),v.end()
#define bs binary_search
#define rall(v) v.rbegin(),v.rend()
#define lb lower_bound
#define ub upper_bound
#define pl pair<ll,ll>
#define f(i,n) for(ll i=0;i<n;i++)
#define inp(a,n) vector<ll> a(n); f(i,n) cin>>a[i]
#define pyes cout<<"YES\n"
#define pno cout<<"NO\n"
using Mat = array<array<ll,2>,2>;
#pragma GCC optimize("O3,unroll-loops")
#pragma GCC target("popcnt")
ll power(ll x,ll y,ll z=LLONG_MAX){
ll res=1;
while(y>0){
if(y%2) res=(res*x)%z;
y/=2;
if(y) x=(x*x)%z;
}return res%z;
}
ll gcd(ll a,ll b,ll& x,ll& y){
x=1,y=0;
ll x1=0,y1=1,a1=a,b1=b;
while(b1){
ll q=a1/b1;
tie(x,x1)=make_tuple(x1,x-q*x1);
tie(y,y1)=make_tuple(y1,y-q*y1);
tie(a1,b1)=make_tuple(b1,a1-q*b1);
}return a1;
}
#define MX 1e6+6
vector<ll> spf(MX+2,1);
vector<ll> gpf(MX+2,0);
void sieve(){
spf[0]=0;
for(ll i=2;i<3;i++){
if(spf[i]==1){
for(ll j=i;j<MX+1;j+=i) if(spf[j]==1) spf[j]=i;
}
}for(ll i=3;i<MX+1;i+=2){
if(spf[i]==1){
for(ll j=i;j<MX+1;j+=i) if(spf[j]==1) spf[j]=i;
}
}for(ll i=2;i<=MX;i++) if(gpf[i]==0) for(ll j=i;j<=MX;j+=i) gpf[j]=i;
return;
}
vector<ll> dp(MX, 0);
void func(){
dp[3] = 1;
for(ll i = 4; i < MX; i++){
vector<ll> pr;
ll x = i;
while(x > 1){
ll y = spf[x];
while(x % y == 0) x /= y;
pr.pb(y);
}
ll val = i - 2, sz = pr.size();
ll ans = 0;
f(j, (1ll << sz)){
ll pro = 1, cnt = 0;
f(k, sz) if((1ll << k) & j) pro *= pr[k], cnt++;
if(cnt % 2) ans -= (val / pro);
else ans += (val / pro);
}
dp[i] = dp[i - 1] + ans;
}
}
int main(){
ios_base::sync_with_stdio(false); cin.tie(NULL);
ll t=1,i,j,n,m,z=1e9+7,z2=998244353,k,x,y,q;
sieve(); // for SPF
func();
cin>>t;
ll tt=t;
string s,s2;
while(tt--){
ll l, r;
cin >> l >> r;
cout << dp[r] - dp[l - 1] << '\n';
}
return 0;
}
We define a leaf node as a node with only one incident edge (degree 1).
Algorithm Steps:
1. If any node has more than 2 leaf neighbors, the answer is impossible ($$$-1$$$), because at least one leaf will remain undeleted.
2. If any node has exactly 2 leaf neighbors ($$$l_1$$$ and $$$l_2$$$), **delete the path between $$$l_1$$$ and $$$l_2$$$**.
3. Repeat steps 1–2 until no more changes occur.
4. If all remaining nodes have at most 1 leaf neighbor, remove one leaf together with its neighbor, then go back to step 1.
Why is this algorithm correct? Let's analyze. Call a connected component "fine" if no node has more than 2 leaf neighbors and at most one node has exactly 2 leaf neighbors. Executing step 2 on a "fine" connected component leaves it a "fine" connected component. Call a connected component "good" if no node has more than 1 leaf neighbor. Executing step 4 on a "good" connected component turns it into a "good" or "fine" connected component. By induction, the process terminates successfully. After the first execution of steps 1 and 2, if no node has more than 2 leaf neighbors, then there must be a solution, because at that point every connected component is "fine".
#include <iostream>
#include <vector>
#include <queue>
#include <set>
using namespace std;
vector<vector<int>> g;
vector<int> deg;
vector<bool> deleted;
vector<pair<int, int>> operations;
int n;
bool checkImpossible() {
for (int u = 1; u <= n; ++u) {
int leaf_neighbors = 0;
for (int v : g[u]) {
if (deg[v] == 1) {
leaf_neighbors++;
}
}
if (leaf_neighbors > 2) {
return true;
}
}
return false;
}
void deleteTwoLeaves() {
while (true) {
bool found = false;
for (int u = 1; u <= n; ++u) {
if (deleted[u]) continue;
int leaf_neighbors = 0;
vector<int> leaves;
for (int v : g[u]) {
if (!deleted[v] && deg[v] == 1) {
leaf_neighbors++;
leaves.push_back(v);
}
}
if (leaf_neighbors == 2) {
operations.push_back({leaves[0], leaves[1]});
deleted[leaves[0]] = deleted[leaves[1]] = deleted[u] = true;
for (int v : g[u]) {
if (!deleted[v]) {
deg[v]--;
}
}
found = true;
break;
}
}
if (!found) break;
}
}
bool checkRemainingNodes() {
for (int u = 1; u <= n; ++u) {
if (!deleted[u] && deg[u] == 0) {
return true;
}
}
for (int u = 1; u <= n; ++u) {
if (deleted[u]) continue;
int leaf_neighbors = 0;
for (int v : g[u]) {
if (!deleted[v] && deg[v] == 1) {
leaf_neighbors++;
}
}
if (leaf_neighbors > 2) {
return true;
}
}
return false;
}
void deleteRemainingNodes() {
while (true) {
bool all_deleted = true;
for (int u = 1; u <= n; ++u) {
if (!deleted[u]) {
all_deleted = false;
break;
}
}
if (all_deleted) break;
bool found = false;
for (int u = 1; u <= n; ++u) {
if (deleted[u]) continue;
int leaf_neighbors = 0;
vector<int> leaves;
for (int v : g[u]) {
if (!deleted[v] && deg[v] == 1) {
leaf_neighbors++;
leaves.push_back(v);
}
}
if (leaf_neighbors == 2) {
operations.push_back({leaves[0], leaves[1]});
deleted[leaves[0]] = deleted[leaves[1]] = deleted[u] = true;
for (int v : g[u]) {
if (!deleted[v]) {
deg[v]--;
}
}
found = true;
break;
}
}
if (found) continue;
for (int u = 1; u <= n; ++u) {
if (!deleted[u] && deg[u] == 1) {
for (int v : g[u]) {
if (!deleted[v]) {
operations.push_back({u, v});
deleted[u] = deleted[v] = true;
for (int w : g[v]) {
if (!deleted[w]) {
deg[w]--;
}
}
found = true;
break;
}
}
if (found) break;
}
}
if (!found) {
cout << -1 << endl;
return;
}
}
}
void solve() {
cin >> n;
g.assign(n + 1, vector<int>());
deg.assign(n + 1, 0);
deleted.assign(n + 1, false);
operations.clear();
for (int i = 0; i < n - 1; ++i) {
int u, v;
cin >> u >> v;
g[u].push_back(v);
g[v].push_back(u);
deg[u]++;
deg[v]++;
}
if (checkImpossible()) {
cout << -1 <<endl;
return;
}
deleteTwoLeaves();
if (checkRemainingNodes()) {
cout << -1 <<endl;
return;
}
deleteRemainingNodes();
cout << operations.size() << endl;
for (auto op : operations) {
cout << op.first << " " << op.second << endl;
}
}
int main() {
int t;
cin >> t;
while (t--) {
solve();
}
return 0;
}
Idea: wuhudsm
1. Observation
The product of:
— Number of non-leaf nodes, and
— Minimum degree among non-leaf nodes
is $$$O(n)$$$.
2. Dynamic Programming (DP) State Definition
- State: $$$dp[v][j][k][m]$$$, where $$$v$$$ is the current node, $$$j$$$ is the sum of values over all nodes in the subtree rooted at $$$v$$$, $$$k$$$ is the binary weight of node $$$v$$$ (0 or 1), and $$$m$$$ is the binary weight of $$$v$$$'s parent (0 or 1).
- Recurrence relation:
with the constraint $$$\sum t_x = j - k - m$$$ over all children $$$\text{son}_x$$$.
3. Critical Optimization
- Leaf node skipping: direct computation of this DP would lead to TLE, so skip leaf nodes during the DP traversal.
- This reduces redundant calculations by exploiting tree sparsity.
4. Final Complexity
- Optimized runtime: $$$O(n \sqrt{n})$$$
Thanks Ming_Xu for this solution.
TheForces #43: F (Equal Node Sum)
Note that the sum of all degrees in a tree is exactly $$$2 \cdot (n - 1) = 2n - 2$$$.
Observation 1: The condition "For each non-leaf node, its sum value is equal to the sum value of every other non-leaf node." means that all non-leaf vertices must have the same sum value. The maximum sum value of a vertex is its degree (attained when all incident edges have weight $$$1$$$).
Let $$$\mathcal X := \{ v \in V \;\vert\; v \text{ is not a leaf} \}$$$ (all non-leaf vertices) and let $$$\alpha := \min_{u \in \mathcal X} \mathrm{deg}(u)$$$ (the minimum degree among all non-leaf vertices). Clearly, the maximum possible sum value is at most $$$\alpha$$$ by Observation 1. Additionally, $$$|\mathcal X| \cdot \alpha \le 2n - 2$$$, since $$$|\mathcal X| \cdot \alpha$$$ counts a subset of all "degrees".
So the idea is as follows:
For all $$$k \in [0; \alpha]$$$, we calculate the number of different assignments where the sum value of all non-leaf vertices is equal to $$$k$$$.
The answer for $$$k = 0$$$ is trivial (setting everything to $$$0$$$ is the only assignment), so let's fix $$$k \in [1; \alpha]$$$. Let $$$f(u, a)$$$ be the number of ways to assign weights to edges within the subtree of $$$u$$$ such that all non-leaf descendants of $$$u$$$ have a sum value of $$$k$$$, and the sum of weights on edges from $$$u$$$ to its children is $$$a$$$. This means that if $$$a = k$$$, we don't need to set the weight of the edge to the parent of $$$u$$$ to $$$1$$$; conversely, if $$$a = k - 1$$$, we would still need to set the weight of the edge to the parent to $$$1$$$.
For now, let's pretend that $$$f(u, a) = 1$$$ for all leaves (there's one (empty) assignment of the (empty) subtree of the leaf where we don't need to set the weight of the edge to the parent of $$$u$$$ to $$$1$$$ and there's also one assignment where we set the weight to $$$1$$$). Let $$$\mathcal C_u$$$ be the (direct) children of $$$u$$$. Assume that we knew $$$f(v, a)$$$ for all children $$$v \in \mathcal C_u$$$ — we could then calculate $$$f(u, a)$$$:
If we "take" a child which currently only has a sum value of $$$k - 1$$$, then we will set the weight to $$$1$$$ of the edge from that child to $$$u$$$, i.e. also increasing the sum value of $$$u$$$ by one. In total, we need to do this $$$a$$$ times to have a sum value of $$$a$$$. Doing this for all subsets (with size $$$a$$$) of $$$\mathcal C_u$$$ is too slow — instead, we can use a $$$\mathcal O(|\mathcal C_u| \cdot (k + 1))$$$ DP to calculate $$$f(u, k - 1)$$$ and $$$f(u, k)$$$ (simultaneously):
Let $$$d(i, j)$$$ denote the number of different assignments among the first $$$i$$$ children, such that the first $$$i$$$ children have a sum value of $$$k$$$ (or are leaves) and such that the current sum value of $$$u$$$ is $$$j$$$ (i.e. we set the weight of $$$j$$$ edges, to the first $$$i$$$ children, to $$$1$$$). The base case is $$$d(0, 0) = 1$$$. For $$$i \gt 0$$$, we have $$$d(i, j) = d(i - 1, j) \cdot f(v_i, k) + d(i - 1, j - 1) \cdot f(v_i, k - 1)$$$, where $$$v_i$$$ denotes the $$$i$$$-th child.
Unfortunately, this will be too slow. Observe that we essentially want to compute the product $$$\prod_{v \in \mathcal C_u} \big( f(v, k) + f(v, k - 1)\,x \big)$$$, whose coefficient of $$$x^a$$$ is exactly $$$f(u, a)$$$.
This can be easily seen from the DP transitions. Using divide-and-conquer and truncating the polynomials to the first $$$k + 1$$$ coefficients in intermediate steps, we can calculate $$$f(u, a)$$$ for $$$a \in [k - 1, k]$$$ in $$$\mathcal O(|\mathcal C_u| \log^2 |\mathcal C_u|)$$$ (keep in mind that $$$k \in \mathcal O(|\mathcal C_u|)$$$, so $$$k$$$ won't show up here).
If we naïvely implement such a DFS, we won't be able to use the fact that $$$|\mathcal X| \cdot \alpha \le 2n - 2$$$, since we will visit all leaf vertices in the worst case (e.g. consider the star graph; here, $$$\alpha \in [1; n - 1]$$$, but we would visit every vertex $$$\mathcal O(n)$$$ times!).
To circumvent this, we perform two small optimizations:
1) Simply delete all leaf vertices, but remember, for every non-leaf vertex, how many of its children were leaves. Let's call this number $$$L_u$$$.
2) Now we need to fix our calculation of $$$f(u, a)$$$, since we are not allowed to directly use the polynomials of the leaf vertices anymore. Let $$$\mathcal P$$$ denote the product of the polynomials of the non-leaf children. Then, remember that for leaf vertices our polynomial was $$$1 + x$$$, so the product of the polynomials of all leaf children is $$$\mathcal Q = (1 + x)^{L_u} = \sum_{i = 0}^{L_u} \binom{L_u}{i} x^i$$$. Since $$$f(u, a) = [x^a](\mathcal P \cdot \mathcal Q)$$$, we need to find the coefficient of $$$x^a$$$ in $$$\mathcal P \cdot \mathcal Q$$$. Luckily, a simple for-loop is enough here (since we only need two coefficients).
Let's analyze the time complexity:
1) Preprocessing can be done in $$$\mathcal O(n)$$$ time.
2) To find the number of different assignments for a fixed $$$k$$$, we only need to consider non-leaf vertices:
Since $$$|\mathcal C_u| \ge \alpha \ge k$$$, we know that $$$k = \mathcal O(|\mathcal C_u|)$$$:
Now, summing over all $$$k \in [1; \alpha]$$$:
Thus, the final time complexity is $$$\mathcal O(n \log^2 n)$$$.
$$$(1)$$$: We iterate over all coefficients of $$$\mathcal P$$$ of $$$x^i$$$ for $$$i \le k$$$ and calculate its contribution to $$$x^a$$$. This only involves multiplying that coefficient with the unique coefficient $$$[x^{a - i}] \mathcal Q$$$, which is a binomial coefficient. Thus, this contribution for a fixed coefficient of $$$\mathcal P$$$ is found in (amortized) $$$\mathcal O(1)$$$ time. Calculating $$$[x^a] (\mathcal P \cdot \mathcal Q)$$$ thus takes $$$\mathcal O(\min\{ k, |\mathcal C_u \cap \mathcal X| \}) = \mathcal O(|\mathcal C_u \cap \mathcal X|)$$$ time.
$$$(2)$$$: Note that $$$\sum_{u \in \mathcal X} |\mathcal C_u \cap \mathcal X|$$$ is equivalent to the degree count between non-leaf vertices, i.e. at most two times the number of edges between non-leaf vertices. Since the non-leaf vertices form a tree, $$$\sum_{u \in \mathcal X} |\mathcal C_u \cap \mathcal X| \le 2|\mathcal X| - 2 = \mathcal O(|\mathcal X|)$$$.
#include <map>
#include <set>
#include <cmath>
#include <ctime>
#include <queue>
#include <stack>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <cstring>
#include <algorithm>
#include <iostream>
#include <bitset>
using namespace std;
typedef double db;
typedef long long ll;
typedef unsigned long long ull;
const int N=1000010;
const int LOGN=28;
const ll TMD=998244353;
const ll INF=2147483647;
int T,n,min_deg;
vector<int> G[N];
vector<ll> dp[N][2];
//-------------------------------------------------
ll fac[N],inv[N];
ll pw(ll x,ll p)
{
if(!p) return 1;
ll y=pw(x,p>>1);
y=(y*y)%TMD;
if(p&1) y=(y*(x%TMD))%TMD;
return y;
}
void init()
{
fac[0]=1;
for(int i=1;i<N;i++) fac[i]=(fac[i-1]*i)%TMD;
inv[N-1]=pw(fac[N-1],TMD-2);
for(int i=N-2;i>=0;i--) inv[i]=(inv[i+1]*(i+1))%TMD;
}
ll C(int n,int m)
{
if(n<m||n<0||m<0) return 0;
return fac[n]*inv[m]%TMD*inv[n-m]%TMD;
}
ll A(int n,int m)
{
if(n<m||n<0||m<0) return 0;
return fac[n]*inv[n-m]%TMD;
}
ll INV(ll x)
{
return pw(x,TMD-2);
}
//-------------------------------------------------
void DP(int x,int fa)
{
if(x!=1&&G[x].size()==1) return ;
for(int i=0;i<=1;i++)
{
dp[x][i].clear();
dp[x][i].resize(min_deg+1);
}
int cl=0;
vector<int> nl;
for(int i=0;i<G[x].size();i++)
{
int y=G[x][i];
if(y==fa) continue;
DP(y,x);
if(G[y].size()==1) cl++;
else nl.push_back(y);
}
for(int sum=0;sum<=min_deg;sum++)
{
for(int b=0;b<=1;b++)
{
//int cl=0,cnl=0;
vector<ll> dps(min((int)nl.size(),min_deg)+2),tmp(min((int)nl.size(),min_deg)+2);
dps[b]=1;
for(int i=0;i<nl.size();i++)
{
int y=nl[i];
tmp[0]=dps[0]*dp[y][0][sum]%TMD;
for(int j=1;j<=min(i+1,min_deg)+1;j++)
tmp[j]=(dps[j-1]*dp[y][1][sum]+dps[j]*dp[y][0][sum])%TMD;
for(int j=0;j<=min(i+1,min_deg)+1;j++) dps[j]=tmp[j];
}
for(int j=0;j<=min((int)nl.size(),min_deg)+1;j++)
dp[x][b][sum]=(dp[x][b][sum]+dps[j]*C(cl,sum-j))%TMD;
//
//for(int j=0;j<=min(cnl+1,min_deg);j++)
// printf("dp[%d][%d][%d]=%I64d\n",x,b,sum,dp[x][b][sum]);
//
}
}
}
int main()
{
//freopen("test.in","r",stdin);
init();
scanf("%d",&T);
while(T--)
{
scanf("%d",&n);
for(int i=1;i<=n;i++) G[i].clear();
for(int i=1;i<n;i++)
{
int u,v;
scanf("%d%d",&u,&v);
G[u].push_back(v);
G[v].push_back(u);
}
min_deg=G[1].size();
for(int i=2;i<=n;i++)
if(G[i].size()!=1) min_deg=min(min_deg,(int)G[i].size());
DP(1,0);
ll ans=0;
for(int i=0;i<dp[1][0].size();i++) ans=(ans+dp[1][0][i])%TMD;
printf("%lld\n",ans);
}
//fclose(stdin);
return 0;
}
fn main() {
let _io = io();
for _ in 0..r!(U) {
let n = r!(U);
let mut edges = vec![vec![]; n];
for _ in 1..n {
let (u, v) = r!(usize, usize);
edges[u - 1].push(v - 1);
edges[v - 1].push(u - 1);
}
for u in 0..n {
edges[u].sort_unstable();
}
fn dfs1(u: U, p: U, edges: &mut Vec<Vec<U>>, leafs: &mut Vec<U>) -> U {
if edges[u].len() == 1 && edges[u][0] == p {
return U::MAX;
}
let mut mn = U::MAX;
let mut tbr = vec![];
for &v in &edges[u].clone() {
if v != p {
let x = dfs1(v, u, edges, leafs);
if x == U::MAX {
tbr.push(v)
}
mn = mn.min(x);
}
}
leafs[u] = tbr.len();
tbr.push(U::MAX);
let mut new = vec![];
let mut i = 0;
let mut j = 0;
while i < edges[u].len() {
if edges[u][i] == tbr[j] {
i += 1;
j += 1;
} else if edges[u][i] < tbr[j] {
new.push(edges[u][i]);
i += 1;
} else {
j += 1;
}
}
edges[u] = new;
mn.min(edges[u].len() + leafs[u])
}
let mut leafs = vec![0; n];
let mx = dfs1(0, 0, &mut edges, &mut leafs);
fn dfs2(u: U, p: U, edges: &Vec<Vec<U>>, leafs: &Vec<U>, k: U) -> (N, N) {
// (s == k, s == k - 1)
let mut polys = vec![vec![N::ONE]];
for &v in &edges[u] {
if v != p {
let (a, b) = dfs2(v, u, edges, leafs, k);
polys.push(vec![a, b]);
}
}
fn work(mut polys: Vec<Vec<N>>, k: U) -> Vec<N> {
if polys.is_empty() {
vec![]
} else if polys.len() == 1 {
polys.remove(0)
} else {
let m = polys.len() >> 1;
let new = polys.split_off(m);
let r1 = work(polys, k);
let r2 = work(new, k);
let mut res = multiply(r1, r2);
res.truncate(k + 2);
res
}
}
let mut n1 = N::ZERO;
let mut n0 = N::ZERO;
let leafs = leafs[u];
for (i, r) in work(polys, k).into_iter().enumerate() {
for (j, nn) in [(k, &mut n0), (k - 1, &mut n1)] {
if i > j {
continue;
}
let d = j - i;
*nn += r * leafs.choose(d);
}
}
(n0, n1)
}
let mut ans = N::ONE;
for k in 1..=mx {
ans += dfs2(0, 0, &edges, &leafs, k).0;
}
w!(ans);
}
}
static mut SEEN: Vec<(N, N)> = Vec::new();
#[allow(static_mut_refs)]
fn get_seen() -> &'static mut Vec<(N, N)> {
unsafe { &mut SEEN }
}
/// use `init_seen` to get initial "dp" memoization table, doubles size on miss (why not)
pub trait Combinatorics {
fn choose(self, k: usize) -> N;
fn fact(self) -> N;
fn inv_fact(self) -> N;
}
impl Combinatorics for usize {
fn choose(self, k: usize) -> N {
if self < k {
N::ZERO
} else {
self.fact() * (self - k).inv_fact() * k.inv_fact()
}
}
fn fact(self) -> N {
let seen = get_seen();
while self >= seen.len() {
init_seen(seen);
}
seen[self].0
}
fn inv_fact(self) -> N {
let seen = get_seen();
while self >= seen.len() {
init_seen(seen);
}
seen[self].1
}
}
fn init_seen(seen: &mut Vec<(N, N)>) {
if seen.is_empty() {
seen.push((N::ONE, N::ONE));
}
let mut cur = seen.l().0;
let mut cur_inv = seen.l().1;
for i in seen.len()..seen.len() * 2 {
cur *= N::new(i);
cur_inv /= N::new(i);
seen.push((cur, cur_inv));
}
}
// require mod_int
// to find primitive roots: https://www.wolframalpha.com/input/?i=PrimitiveRoots%5B%5B%2F%2Fnumber%3A998244353%2F%2F%5D%5D
const C: usize = 119;
const ROOT_PW: usize = 1 << 23;
const G: M = M::new(3);
const ROOT: M = G.pow(C);
const ROOT_1: M = ROOT.inv();
type M = Mod;
// from https://cp-algorithms.com/algebra/fft.html#number-theoretic-transform
pub fn fft(a: &mut Vec<M>, invert: bool) {
let n = a.len();
let mut j = 0;
for i in 1..n {
let mut bit = n >> 1;
while j & bit != 0 {
j ^= bit;
bit >>= 1;
}
j ^= bit;
if i < j {
a.swap(i, j);
}
}
let mut len = 2;
while len <= n {
let mut wlen = if invert { ROOT_1 } else { ROOT };
let mut i = len;
while i < ROOT_PW {
wlen *= wlen;
i <<= 1;
}
for i in (0..n).step_by(len) {
let mut w = M::ONE;
for j in 0..len / 2 {
let u = a[i + j];
let v = a[i + j + len / 2] * w;
a[i + j] = u + v;
a[i + j + len / 2] = u - v;
w *= wlen;
}
}
len <<= 1;
}
if invert {
let n_1 = M::new(n).inv();
for x in a {
*x *= n_1;
}
}
}
pub fn multiply(mut fa: Vec<M>, mut fb: Vec<M>) -> Vec<M> {
let n = (fa.len() + fb.len()).next_power_of_two();
fa.resize(n, M::ZERO);
fb.resize(n, M::ZERO);
fft(&mut fa, false);
fft(&mut fb, false);
for (fai, fbi) in fa.iter_mut().zip(fb) {
*fai *= fbi;
}
fft(&mut fa, true);
fa
}
const MOD: usize = 998_244_353;
// const MOD: usize = 1_000_000_007;
type N = Mod;
use std::ops::*;
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Default, Debug)]
pub struct Mod(pub usize);
#[allow(unused)]
impl Mod {
const ZERO: Self = Self::new(0);
const NEG_ONE: Self = Self::newi(-1);
const ONE: Self = Self::new(1);
const TWO: Self = Self::new(2);
const NEG_TWO: Self = Self::newi(-2);
const INV_TWO: Self = Self::new(2).inv();
pub const fn newi(x: isize) -> Self {
Self::new(((x % MOD as isize) + MOD as isize) as usize)
}
fn new2(x: usize) -> Self {
debug_assert!(x < MOD << 1);
if x >= MOD { Self(x - MOD) } else { Self(x) }
}
pub const fn new(x: usize) -> Self {
Self(x % MOD)
}
pub const fn pow(self, mut e: usize) -> Self {
let mut res = Self(1);
let mut cur = self;
while e > 0 {
if e & 1 == 1 {
res = res.fast_mul(cur);
}
cur = cur.fast_mul(cur);
e >>= 1;
}
res
}
pub const fn inv(self) -> Self {
#[cfg(debug_assertions)]
if self.0 == 0 {
panic!("Division by `0`.");
}
self.pow(MOD - 2)
}
pub const fn fast_mul(self, rhs: Self) -> Self {
if MOD < u32::MAX as usize {
Self((self.0 * rhs.0) % MOD)
} else {
Self(((self.0 as u128 * rhs.0 as u128) % MOD as u128) as usize)
}
}
}
impl std::iter::Sum for Mod {
fn sum<I: Iterator<Item = Self>>(iter: I) -> Self {
iter.fold(Self::ZERO, |a, b| a + b)
}
}
impl std::iter::Product for Mod {
fn product<I: Iterator<Item = Self>>(iter: I) -> Self {
iter.fold(Self::ONE, |a, b| a * b)
}
}
impl std::fmt::Display for Mod {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.pad_integral(true, "", &self.0.to_string())
}
}
impl AsRef<usize> for Mod {
fn as_ref(&self) -> &usize {
&self.0
}
}
impl AsMut<usize> for Mod {
fn as_mut(&mut self) -> &mut usize {
&mut self.0
}
}
impl Add for Mod {
type Output = Self;
fn add(self, rhs: Self) -> Self {
Self::new2(self.0 + rhs.0)
}
}
impl Add<usize> for Mod {
type Output = Self;
fn add(self, rhs: usize) -> Self {
self + Self::new(rhs)
}
}
impl Sub for Mod {
type Output = Self;
fn sub(self, rhs: Self) -> Self {
Self::new2(self.0 + MOD - rhs.0)
}
}
impl Sub<usize> for Mod {
type Output = Self;
fn sub(self, rhs: usize) -> Self {
self - Self::new(rhs)
}
}
impl Mul for Mod {
type Output = Self;
fn mul(self, rhs: Self) -> Self {
self.fast_mul(rhs)
}
}
impl Mul<usize> for Mod {
type Output = Self;
fn mul(self, rhs: usize) -> Self {
if MOD < u32::MAX as usize {
Self((self.0 * rhs) % MOD)
} else {
Self(((self.0 as u128 * rhs as u128) % MOD as u128) as usize)
}
}
}
impl Div for Mod {
type Output = Self;
fn div(self, rhs: Self) -> Self {
self * rhs.inv()
}
}
impl Div<usize> for Mod {
type Output = Self;
fn div(self, rhs: usize) -> Self {
self * Mod::new(rhs).inv()
}
}
impl Neg for Mod {
type Output = Self;
fn neg(self) -> Self::Output {
if self.0 == 0 {
self
} else {
Self(MOD - self.0)
}
}
}
impl Neg for &Mod {
type Output = Mod;
fn neg(self) -> Self::Output {
-*self
}
}
macro_rules! impl_mod_rest {
($($trait:ident::$fn:ident:$trait_assign:ident::$fn_assign:ident;)*) => {
$(
impl $trait<&Mod> for Mod {
type Output = Mod;
fn $fn(self, rhs: &Mod) -> Self::Output {
$trait::$fn(self, *rhs)
}
}
impl $trait<&Mod> for &Mod {
type Output = Mod;
fn $fn(self, rhs: &Mod) -> Self::Output {
$trait::$fn(*self, *rhs)
}
}
impl $trait<Mod> for &Mod {
type Output = Mod;
fn $fn(self, rhs: Mod) -> Self::Output {
$trait::$fn(*self, rhs)
}
}
impl $trait<&usize> for Mod {
type Output = Mod;
fn $fn(self, rhs: &usize) -> Self::Output {
$trait::$fn(self, *rhs)
}
}
impl $trait<&usize> for &Mod {
type Output = Mod;
fn $fn(self, rhs: &usize) -> Self::Output {
$trait::$fn(*self, *rhs)
}
}
impl $trait<usize> for &Mod {
type Output = Mod;
fn $fn(self, rhs: usize) -> Self::Output {
$trait::$fn(*self, rhs)
}
}
impl $trait_assign for Mod {
fn $fn_assign(&mut self, rhs: Mod) {
*self = $trait::$fn(*self, rhs);
}
}
impl $trait_assign<&Mod> for Mod {
fn $fn_assign(&mut self, rhs: &Mod) {
*self = $trait::$fn(*self, rhs);
}
}
impl $trait_assign<usize> for Mod {
fn $fn_assign(&mut self, rhs: usize) {
*self = $trait::$fn(*self, rhs);
}
}
impl $trait_assign<&usize> for Mod {
fn $fn_assign(&mut self, rhs: &usize) {
*self = $trait::$fn(*self, rhs);
}
}
)*
};
}
impl_mod_rest! {
Add::add:AddAssign::add_assign;
Sub::sub:SubAssign::sub_assign;
Mul::mul:MulAssign::mul_assign;
Div::div:DivAssign::div_assign;
}
pub fn lcm(u: U, v: U) -> U {
u * v / gcd(u, v)
}
// gcd from wiki: https://en.wikipedia.org/wiki/Binary_GCD_algorithm
pub fn gcd(mut u: U, mut v: U) -> U {
if u == 0 {
return v;
} else if v == 0 {
return u;
}
let i = u.trailing_zeros();
u >>= i;
let j = v.trailing_zeros();
v >>= j;
let k = std::cmp::min(i, j);
loop {
if u > v {
std::mem::swap(&mut u, &mut v);
}
v -= u;
if v == 0 {
return u << k;
}
v >>= v.trailing_zeros();
}
}
/// Slower but more robust RNG
pub mod rng2 {
pub struct Rng {
seed: u64,
}
impl Rng {
pub fn new() -> Self {
Self {
seed: std::time::SystemTime::now()
.duration_since(std::time::SystemTime::UNIX_EPOCH)
.unwrap()
.as_nanos() as u64,
}
}
pub fn next(&mut self) -> u64 {
use std::num::Wrapping;
let mut x = Wrapping(self.seed);
x += Wrapping(0x9e3779b97f4a7c15);
x = (x ^ (x >> 30)) * Wrapping(0xbf58476d1ce4e5b9);
x = (x ^ (x >> 27)) * Wrapping(0x94d049bb133111eb);
x ^= x >> 31;
self.seed = x.0;
return self.seed;
}
pub fn next_u64(&mut self) -> u64 {
self.next()
}
pub fn next_u128(&mut self) -> u128 {
let (l, u) = (self.next_u64(), self.next_u64());
((l as u128) << 64) | u as u128
}
pub fn next_u(&mut self) -> usize {
self.next_u64() as usize
}
}
}
pub struct Rng {
seed: u32,
}
impl Rng {
pub fn new() -> Self {
Self {
seed: std::time::SystemTime::now()
.duration_since(std::time::SystemTime::UNIX_EPOCH)
.unwrap()
.subsec_nanos(),
}
}
pub fn next(&mut self) -> u32 {
self.seed ^= self.seed << 13;
self.seed ^= self.seed >> 17;
self.seed ^= self.seed << 5;
self.seed
}
pub fn next_u64(&mut self) -> u64 {
let (l, u) = (self.next(), self.next());
((l as u64) << 32) | u as u64
}
pub fn next_u128(&mut self) -> u128 {
let (l, u) = (self.next_u64(), self.next_u64());
((l as u128) << 64) | u as u128
}
pub fn next_u(&mut self) -> usize {
self.next_u64() as usize
}
}
pub trait IterExt<T> {
fn n(&mut self) -> T;
fn cv(self) -> Vec<T>;
}
impl<T, I: Iterator<Item = T>> IterExt<T> for I {
fn cv(self) -> Vec<T> {
self.collect()
}
fn n(&mut self) -> T {
self.next().unwrap()
}
}
pub trait IterExt2<T: PartialOrd> {
fn mn(self) -> T;
fn mx(self) -> T;
}
impl<T: PartialOrd, I: IntoIterator<Item = T>> IterExt2<T> for I {
fn mn(self) -> T {
self.into_iter()
.min_by(|a, b| a.partial_cmp(b).unwrap())
.unwrap()
}
fn mx(self) -> T {
self.into_iter()
.max_by(|a, b| a.partial_cmp(b).unwrap())
.unwrap()
}
}
pub trait IterExt3<T: ToString> {
fn to_string(self, sep: &str) -> String;
}
impl<T: ToString, I: IntoIterator<Item = T>> IterExt3<T> for I {
fn to_string(self, sep: &str) -> String {
self.into_iter().map(|x| x.to_string()).cv().join(sep)
}
}
pub trait IterExt4<T: Clone> {
fn l(&self) -> T;
}
impl<T: Clone> IterExt4<T> for Vec<T> {
fn l(&self) -> T {
self[self.len() - 1].clone()
}
}
impl<T: Clone> IterExt4<T> for &[T] {
fn l(&self) -> T {
self[self.len() - 1].clone()
}
}
impl<T: Clone> IterExt4<T> for &mut [T] {
fn l(&self) -> T {
self[self.len() - 1].clone()
}
}
pub use lib::*;
#[allow(unused)]
mod lib {
pub type U = usize;
pub type I = isize;
pub type F = f64;
pub use std::cmp::{Ordering, Reverse};
pub use std::collections::*;
pub use std::f64::consts::*;
pub use std::fmt::Write;
use std::io::*;
use std::iter::Filter;
use std::str::{Split, SplitWhitespace};
pub struct Output(String);
impl AsRef<String> for Output {
fn as_ref(&self) -> &String {
&self.0
}
}
impl AsMut<String> for Output {
fn as_mut(&mut self) -> &mut String {
&mut self.0
}
}
static mut OUTPUT: Output = Output(String::new());
#[inline(always)]
#[allow(static_mut_refs)]
pub fn output() -> &'static mut Output {
unsafe { &mut OUTPUT }
}
#[macro_export]
macro_rules! w {
() => {{
let _ = writeln!(output().as_mut());
}};
($t:expr) => {{
let _ = writeln!(output().as_mut(), "{}", $t);
}};
($start:expr $(,$t:expr)* $(,)?) => {{
let o = output().as_mut();
let _ = write!(o, "{}", $start);
$(
let _ = write!(o, " {}", $t);
)*
let _ = writeln!(o);
}};
}
type InputInner = (Split<'static, &'static [char]>, String);
pub struct Input(InputInner); // must drop both at the same time (or the iterator first)
impl Input {
#[inline(always)]
pub fn get_next(&mut self) -> &'static str {
loop {
if let Some(n) = self.0.0.next() {
if !n.is_empty() {
return n;
}
} else {
self.0 = read_next_line();
}
}
}
}
const SKIP_CHARS: [char; 5] = [' ', '\n', '\r', '\t', ','];
fn read_next_line() -> InputInner {
let mut s = String::new();
stdin().read_line(&mut s).unwrap();
let ss: &'static mut str = unsafe { std::mem::transmute(s.as_mut_str()) };
(ss.split(&SKIP_CHARS), s)
}
static mut INPUT: Option<Input> = None;
#[inline(always)]
#[allow(static_mut_refs)]
pub fn input() -> &'static mut Input {
unsafe { INPUT.as_mut().unwrap_unchecked() }
}
pub trait Parse {
type ParsesTo;
#[inline(always)]
fn parse(i: &mut Input) -> Self::ParsesTo;
}
pub struct IO;
impl Drop for IO {
fn drop(&mut self) {
print!("{}", output().0);
}
}
macro_rules! impl_parse {
($($t:tt),*) => {
$(
impl Parse for $t {
type ParsesTo = Self;
#[inline(always)]
fn parse(i: &mut Input) -> Self::ParsesTo {
<Self as std::str::FromStr>::from_str(&i.get_next()).unwrap()
}
}
)*
};
}
impl_parse!(
bool, u8, i8, char, u16, i16, u32, i32, f32, u64, i64, f64, usize, isize, u128, i128
);
impl Parse for String {
type ParsesTo = Vec<char>;
fn parse(i: &mut Input) -> Self::ParsesTo {
i.get_next().chars().collect()
}
}
impl<T: Parse<ParsesTo = U>, U> Parse for Vec<T> {
type ParsesTo = Vec<U>;
fn parse(i: &mut Input) -> Self::ParsesTo {
let n = usize::parse(i);
(0..n).map(|_| T::parse(i)).collect()
}
}
#[inline(always)]
pub fn lin<T: Parse<ParsesTo = U>, U, V: FromIterator<U>>(n: usize) -> V {
let i = input();
(0..n).map(|_| T::parse(i)).collect()
}
#[inline(always)]
pub fn grid<T: Parse<ParsesTo = U>, U, V: FromIterator<U>, W: FromIterator<V>>(
n: usize,
m: usize,
) -> W {
(0..n).map(|_| lin::<T, U, V>(m)).collect()
}
#[macro_export]
macro_rules! rv {
($n:expr; $c:tt<$t:tt>) => {
lin::<$t, _, $c<_>>($n)
};
($n:expr, $m:expr; $c1:tt<$c2:tt<$t:tt>>) => {
grid::<$t, _, $c2<_>, $c1<$c2<_>>>($n, $m)
};
}
#[macro_export]
macro_rules! r {
($($t:ty),* $(,)?) => {{
let i = input();
($(<$t as Parse>::parse(i)),*)
}};
}
pub fn io() -> IO {
let (mut i, mut o) = (Some(Input(read_next_line())), Output(String::new()));
unsafe {
INPUT = i;
OUTPUT = o; // fairly useless, can be removed; currently only "resets" output string
// (which should be a noop)
}
IO
}
}
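For reference, the `gcd` in the template above is the binary (Stein's) GCD from the linked Wikipedia article. Here is a minimal standalone sketch of the same algorithm (with `usize` written out in place of the `U` alias, and `lcm` dividing before multiplying to reduce overflow risk); it is illustrative only, not part of the original template:

```rust
// Standalone sketch of the binary GCD used in the template above.
pub fn gcd(mut u: usize, mut v: usize) -> usize {
    if u == 0 {
        return v;
    } else if v == 0 {
        return u;
    }
    // Strip common factors of two; `k` remembers how many were shared.
    let i = u.trailing_zeros();
    u >>= i;
    let j = v.trailing_zeros();
    v >>= j;
    let k = i.min(j);
    loop {
        // Invariant: u and v are odd. Subtract the smaller from the larger
        // and strip the trailing zeros the subtraction produced.
        if u > v {
            std::mem::swap(&mut u, &mut v);
        }
        v -= u;
        if v == 0 {
            return u << k;
        }
        v >>= v.trailing_zeros();
    }
}

// lcm via gcd; dividing first keeps intermediate values small.
pub fn lcm(u: usize, v: usize) -> usize {
    u / gcd(u, v) * v
}

fn main() {
    assert_eq!(gcd(12, 18), 6);
    assert_eq!(gcd(0, 7), 7);
    assert_eq!(gcd(270, 192), 6);
    assert_eq!(lcm(21, 6), 42);
    println!("ok");
}
```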

Orz round
These are good problems. Thanks for sharing your upsolving, bro!
Great round, I really enjoyed E and F!
E can be solved in $$$\mathcal O(n)$$$:
(The editorial's approach can also be optimized, AFAIK; here is my different approach.)
Let's assume that we can delete the full tree, and let's fix any vertex $$$u$$$ (and its subtree). Then we observe that, in order to delete the subtree of $$$u$$$ (including $$$u$$$), there are two cases:
1) All deletion operations used to delete the subtree have both endpoints inside the subtree of $$$u$$$. In particular, this will be the case for the root $$$1$$$.
2) All but one of the deletion operations have both endpoints inside the subtree of $$$u$$$; the remaining operation has one endpoint inside the subtree of $$$u$$$ and the other endpoint outside of it. We denote this outside endpoint by $$$X_u$$$. Note that, as long as we can delete all the other vertices, it doesn't matter where the endpoint inside the subtree of $$$u$$$ lies.
This motivates the following "tree-dp"/"dfs" — for every vertex, we store a sequence of operations that achieves case 1 or case 2:
Now, let $$$k$$$ denote the number of children for which case 1 is possible but case 2 isn't. Then, there are multiple cases:
The final answer is the "case 1" sequence for the root (if it exists). Note that we have already shown correctness in the beginning.
This can easily be implemented in $$$\mathcal O(n^2)$$$. If we precompute the optimal actions, we can use linked lists to merge the action sequences, resulting in a total runtime of $$$\mathcal O(n)$$$.
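The $$$\mathcal O(1)$$$ merging of action sequences can be done with an arena-backed (index-based) linked list: each subtree keeps a `(head, tail)` handle to its list, and concatenation is a single pointer update. This is an illustrative sketch of that data structure only (names and the `i32` payload are placeholders, not the commenter's actual code):

```rust
// Arena-backed singly linked list with O(1) push and concat — the standard
// trick for merging per-subtree action lists in O(n) total over a tree DP.
const NIL: usize = usize::MAX;

struct Arena<T> {
    data: Vec<T>,
    next: Vec<usize>, // next[i] = index of the successor, NIL = end of list
}

#[derive(Clone, Copy)]
struct List {
    head: usize,
    tail: usize, // both NIL for the empty list
}

impl<T> Arena<T> {
    fn new() -> Self {
        Self { data: Vec::new(), next: Vec::new() }
    }
    fn empty(&self) -> List {
        List { head: NIL, tail: NIL }
    }
    // O(1): append one element to a list.
    fn push(&mut self, l: List, x: T) -> List {
        let id = self.data.len();
        self.data.push(x);
        self.next.push(NIL);
        if l.head == NIL {
            List { head: id, tail: id }
        } else {
            self.next[l.tail] = id;
            List { head: l.head, tail: id }
        }
    }
    // O(1): concatenate two lists (b is appended after a).
    fn concat(&mut self, a: List, b: List) -> List {
        if a.head == NIL {
            return b;
        }
        if b.head == NIL {
            return a;
        }
        self.next[a.tail] = b.head;
        List { head: a.head, tail: b.tail }
    }
    // O(len): read a list out once at the end (e.g. at the root).
    fn collect(&self, l: List) -> Vec<T>
    where
        T: Clone,
    {
        let mut out = Vec::new();
        let mut cur = l.head;
        while cur != NIL {
            out.push(self.data[cur].clone());
            cur = self.next[cur];
        }
        out
    }
}

fn main() {
    let mut arena = Arena::new();
    let mut a = arena.empty();
    a = arena.push(a, 1);
    a = arena.push(a, 2);
    let mut b = arena.empty();
    b = arena.push(b, 3);
    let ab = arena.concat(a, b);
    assert_eq!(arena.collect(ab), vec![1, 2, 3]);
    println!("ok");
}
```

Merging the lists of all children of a vertex costs $$$\mathcal O(\text{number of children})$$$, so all merges over the tree sum to $$$\mathcal O(n)$$$, and the final sequence is materialized once at the root.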
My submission.