2139A — Maple and Multiplication
Idea: installb Preparation: installb,2014CAIS01
If $$$a=b$$$ we need $$$0$$$ steps.
If $$$a$$$ can be divided by $$$b$$$, we only need $$$1$$$ step, which is to multiply $$$b$$$ by $$$\frac{a}{b}$$$. Same for the case when $$$b$$$ can be divided by $$$a$$$.
Otherwise, we need at most $$$2$$$ steps. As both $$$a$$$ and $$$b$$$ are positive integers, we can first multiply $$$a$$$ by $$$b$$$ and then multiply $$$b$$$ by the original value of $$$a$$$. Both numbers will then be equal to $$$a\times b$$$.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
int solve(){
int a,b;
cin >> a >> b;
if(a == b) return 0;
if(a % b == 0 || b % a == 0) return 1;
return 2;
}
int main(){
ios::sync_with_stdio(false);
int TC;
cin >> TC;
while(TC --){
cout << solve() << '\n';
}
return 0;
}
Idea: installb Preparation: installb
The number of cakes collected from an oven depends only on the last time Maple visits it. Therefore, in the optimal strategy, she should visit distinct ovens in the last $$$\min(m,n)$$$ seconds.
Let's think in reverse: suppose we fix an order $$$p_1,\dots,p_n$$$ of the ovens. Then at second $$$m$$$ we visit oven $$$p_1$$$, at second $$$m-1$$$ we visit oven $$$p_2$$$, and so on. For the $$$i$$$-th oven in this order, we collect $$$a_{p_i}\cdot \max(0, m-i+1)$$$ cakes.
To maximize the total number of cakes collected, ovens with larger $$$a_i$$$ should be visited later (i.e., assigned larger multipliers). Thus we sort $$$a$$$ in non-increasing order first.
The final answer is $$$\sum_{i=1}^n a_i \cdot \max(0, m-i+1)$$$.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const int INF = 0x3f3f3f3f;
void solve(){
int n,t;
cin >> n >> t;
ll ans = 0;
vector <int> a(n);
for(int i = 0;i < n;i ++) cin >> a[i];
sort(a.begin(),a.begin() + n,greater <int>());
for(int i = 0;i < n;i ++) ans += 1ll * a[i] * max(0,t - i);
cout << ans << '\n';
}
int main(){
ios::sync_with_stdio(false);
int TC;
cin >> TC;
while(TC --){
solve();
}
return 0;
}
Idea: wjy666 Preparation: wjy666
You can backtrack from the final state.
Denote the state $$$(a,b)$$$ that indicates Chocola has $$$a$$$ cakes and Vanilla has $$$b$$$ cakes.
After one operation of type $$$1$$$, Vanilla's number of cakes will definitely be at least half of the total, and similarly for operation $$$2$$$. If $$$0 \lt a \lt 2^{k}$$$, then $$$2^{k} \lt b \lt 2^{k+1}$$$, meaning the previous operation must have been type $$$1$$$. Similarly, if $$$0 \lt b \lt 2^{k}$$$, the previous operation must have been type $$$2$$$.
Therefore, you can backtrack from the final state $$$(x,2^{k+1}-x)$$$, until it reaches the initial state $$$(2^k,2^k)$$$.
Now let's analyze how many steps the process takes.
For the state $$$(a, b)$$$, where $$$a, b \neq 0$$$, assume $$$a = i \cdot 2^{p_a}$$$ and $$$b = j \cdot 2^{p_b}$$$, $$$i$$$ and $$$j$$$ are positive odd integers.
Since $$$a + b = 2^{k+1}$$$, it must be true that $$$ 0 \le p_a = p_b \le k$$$ , and after one operation $$$1$$$ or $$$2$$$, both $$$p_a$$$ and $$$p_b$$$ decrease by exactly $$$1$$$.
Therefore, the whole process takes at most $$$k$$$ steps, so we solve the problem with $$$O(tk)$$$ time complexity.
#include <bits/stdc++.h>
using namespace std;
int main(){
int t; cin>>t;
while(t--){
long long k,x,kk; cin>>k>>x; kk=1ll<<k;
if (!x||x==kk*2) {cout<<"-1\n"; continue;}
long long y=kk*2-x;
vector<int> ans; ans.clear();
while(x!=kk){
if (x>y) ans.push_back(2),x-=y,y*=2;
else ans.push_back(1),y-=x,x*=2;
}
cout<<ans.size()<<"\n";
while(!ans.empty()) cout<<ans.back()<<' ',ans.pop_back();
cout<<"\n";
}
return 0;
}
2138B — Antiamuny Wants to Learn Swap
Idea: tarjen Preparation: installb,tarjen,2014CAIS01
For array $$$b$$$, we can prove that $$$f(b)\neq g(b)$$$ if and only if there exist indexes $$$1\le i \lt j \lt k\le m$$$ which satisfy $$$b_i \gt b_j \gt b_k$$$.
Since $$$a$$$ is a permutation, all elements in $$$b$$$ should be distinct. $$$g(b)$$$ is equal to the number of inversions in $$$b$$$. When there exists some index $$$i$$$ that satisfies $$$a_i \gt a_{i+1} \gt a_{i+2}$$$, we can reduce the number of inversions by $$$3$$$ by applying operation $$$2$$$, allowing $$$f(b)$$$ to become less than $$$g(b)$$$. This is the only case where you can reduce more than $$$1$$$ inversion with one operation.
If there exist indexes $$$1\le i \lt j \lt k\le m$$$ which satisfy $$$b_i \gt b_j \gt b_k$$$, we can first choose indexes $$$i,k$$$, then move all elements greater than $$$b_i$$$ to the right of $$$b_k$$$ and move all elements less than $$$b_k$$$ to the left of $$$b_i$$$ by swapping adjacent numbers (applying the first operation). Now all elements $$$b_p$$$ between $$$b_i,b_k$$$ satisfy $$$b_i \gt b_p \gt b_k$$$. We then swap them to the left of $$$b_i$$$ until there is only one element $$$b_j$$$ left. Now $$$b_i \gt b_j \gt b_k$$$ and they are adjacent, and we can apply operation $$$2$$$ on the current index of $$$b_i$$$.
If we only apply operation $$$1$$$ on indexes $$$i$$$ which satisfy $$$b_i \gt b_{i+1}$$$, there will never exist any index $$$i$$$ satisfying $$$b_i \gt b_{i+1} \gt b_{i+2}$$$ in $$$b$$$ during the process. If we apply an operation $$$1$$$ on some index $$$i$$$ where $$$b_i \lt b_{i+1}$$$, the number of inversions will be increased by $$$1$$$ instead of decreased by $$$1$$$, which makes it impossible to reach $$$f(b) \lt g(b)$$$ with one operation $$$2$$$, even if it decreases the number of inversions by $$$3$$$. Therefore, in this case $$$f(b)=g(b)$$$.
For each element $$$a_i$$$, we define:
- $$$l_i$$$: the maximum index $$$ \lt i$$$ such that $$$a_{l_i} \gt a_i$$$,
- $$$r_i$$$: the minimum index $$$ \gt i$$$ such that $$$a_{r_i} \lt a_i$$$.
Then the answer to a query $$$[l, r]$$$ is $$$\texttt{NO}$$$ if and only if some segment $$$[l_i, r_i]$$$ lies fully inside $$$[l, r]$$$.
To answer multiple queries, you can preprocess the smallest $$$r$$$ for every $$$l$$$ such that the answers for all $$$[l,x](x\ge r)$$$ should be $$$\texttt{NO}$$$.
The time complexity is $$$O(n)$$$ if you find $$$l_i,r_i$$$ with the Monotonic Stack trick. $$$O(n\log n)$$$ solutions can also pass.
#include <bits/stdc++.h>
using namespace std;
void solve()
{
int n, q;
cin >> n >> q;
vector<int> a(n + 2);
for (int i = 1; i <= n; i++)
{
cin >> a[i];
}
vector<int> mxl(n + 1), mir(n + 1);
for (int i = 1; i <= n; i++)
{
mxl[i] = i - 1;
while (mxl[i] > 0 && a[mxl[i]] < a[i])
mxl[i] = mxl[mxl[i]];
}
for (int i = n; i >= 1; i--)
{
mir[i] = i + 1;
while (mir[i] <= n && a[mir[i]] > a[i])
mir[i] = mir[mir[i]];
}
vector<int> L(n + 1, 0);
for (int i = 2; i < n; i++)
{
if (mxl[i] > 0 && mir[i] <= n)
{
L[mir[i]] = max(L[mir[i]], mxl[i]);
}
}
for (int i = 1; i <= n; i++)
{
L[i] = max(L[i], L[i - 1]);
}
for (int i = 0; i < q; i++)
{
int l, r;
cin >> l >> r;
if (l <= L[r])
cout << "NO\n";
else
cout << "YES\n";
}
}
int main()
{
ios::sync_with_stdio(false);
cin.tie(0);
int T;
cin >> T;
while (T--)
{
solve();
}
}
2138C1 — Maple and Tree Beauty (Easy Version)
Idea: installb Preparation: installb,tarjen,2014CAIS01
First, note that the maximum possible answer is the minimum depth among all leaves, $$$d=\min dep_u$$$, as the LCS can't be longer than the shortest of the names.
We consider a special case first where all leaves have the same depth. If we want to achieve the maximum answer $$$d$$$, the names of all leaves should be exactly the same, which means that for any two vertices with the same depth, their labels have to be the same.
For each depth $$$i$$$, we group all vertices with depth $$$i$$$ together as the $$$i$$$-th group (suppose root has depth $$$1$$$). For each group, all vertices in the group should have the same label. Let the size of groups $$$1,\cdots,d$$$ be $$$c_1,\cdots,c_d$$$. Now the problem can be considered as a knapsack problem, where you have $$$d$$$ items with weight $$$c_1,\cdots,c_d$$$. We need to find out whether you can take several numbers from $$$c$$$ that have a sum equal to $$$k$$$. This can be solved in $$$O(n^2)$$$.
If the maximum answer $$$d$$$ is not achievable, we can show that $$$d-1$$$ is always achievable. For each group from the first to the $$$d$$$-th, we assign to all vertices in the group a label ($$$0$$$ or $$$1$$$) of which at least $$$c_i$$$ copies still remain. As all leaves have the same depth, $$$c_d$$$ is the maximum element of the array, so a label can always be selected for groups $$$1$$$ to $$$d-1$$$. Then arbitrarily distribute the remaining labels among the vertices of the last group (which are the leaves).
Now consider the general case where not all leaves have the same depth. If we want to achieve answer $$$=d$$$, we need to select $$$d$$$ groups. For each leaf vertex $$$v$$$, there should exist a vertex in the $$$d$$$-th group which is an ancestor of $$$v$$$. And for every $$$2\le i\le d$$$, for each vertex $$$v$$$ in the $$$i$$$-th group, there should exist a vertex in the $$$i-1$$$-th group which is an ancestor of $$$v$$$. All vertices in the same group should have the same label. Suppose the $$$i$$$-th group has label $$$l_i$$$, then we can achieve $$$\text{LCS}=l_1l_2\ldots l_d$$$ which has length $$$d$$$.
We can still group all vertices with depth $$$1\le i\le d$$$ together as the $$$i$$$-th group. For all vertices with depth $$$ \gt d$$$, we can set their label arbitrarily. We can view them as items with weight $$$1$$$ in the knapsack problem.
And we can show this is the optimal way to select $$$d$$$ groups. If some vertex $$$u$$$ with depth $$$\le d$$$ is not selected, there must be some vertices selected in some group in the subtree of $$$u$$$, because the path from the root to $$$u$$$ consists of fewer than $$$d$$$ vertices. If the smallest group index that some vertex in the subtree of $$$u$$$ belongs to is $$$i$$$, we can instead let $$$u$$$ be inside the $$$i$$$-th group, and then the labels of all vertices in the subtree of $$$u$$$ that were previously in group $$$i$$$ can be selected arbitrarily. If we consider the knapsack problem, we have reduced some $$$c_i$$$ by $$$x$$$ and added $$$x$$$ items with weight $$$1$$$. All achievable values still remain achievable, while some previously unachievable values become achievable.
The time complexity is $$$O(n^2)$$$; the bottleneck is the knapsack.
// n^2/w
#include <bits/stdc++.h>
using namespace std;
int solve() {
int n, k;
cin >> n >> k;
vector<int> fa(n, -1), dep(n, 0);
vector<int> noson(n, 1), cnt(n + 1);
cnt[0]++;
for (int i = 1; i < n; i++) {
cin >> fa[i], fa[i]--;
dep[i] = dep[fa[i]] + 1;
cnt[dep[i]]++;
noson[fa[i]] = 0;
}
int mxdep = 1e9;
for (int i = 0; i < n; i++)
if (noson[i]) mxdep = min(mxdep, dep[i]);
int sum = 0;
vector<int> dp(n + 1);
dp[0] = 1;
for (int i = 0; i <= mxdep; i++) {
for (int j = sum; j >= 0; j--) dp[j + cnt[i]] |= dp[j];
sum += cnt[i];
}
if (sum <= k || sum <= n - k) return mxdep + 1;
for (int i = 0; i <= sum; i++)
if (dp[i]) {
if (i <= k && sum - i <= n - k) return mxdep + 1;
}
return mxdep;
}
int main() {
ios::sync_with_stdio(false);
cin.tie(0);
int T;
cin >> T;
while (T--) cout << solve() << "\n";
return 0;
}
2138C2 — Maple and Tree Beauty (Hard Version)
Idea: installb Preparation: installb,tarjen,2014CAIS01
We focus on optimizing the knapsack problem. In the knapsack problem we only need to know $$$f_i=0/1$$$, i.e., whether sum $$$i$$$ is achievable, so we can optimize the knapsack with a bitset. The time complexity is $$$O(\frac{n^2}{\omega})$$$, which is enough to pass this version.
But actually we can do better. For the knapsack problem, it has at most $$$n$$$ items, and the sum of weights of all items is also at most $$$n$$$. We can do a sqrt decomposition trick here.
- For items with weight $$$\ge\sqrt{n}$$$, there are at most $$$\sqrt{n}$$$ such items.
- For items with weight $$$ \lt \sqrt{n}$$$, we count the number of items for each different weight. If there are $$$c_w$$$ items of weight $$$w$$$, we decompose $$$c_w$$$ as $$$c_w=2^0+2^1+\cdots+2^k+y$$$, where $$$k$$$ is the largest integer satisfying $$$2^0+2^1+\ldots+2^k\le c_w$$$. Then we create new items with weights $$$2^0\cdot w,2^1\cdot w,\ldots,2^k\cdot w,y\cdot w$$$. The set of new items achieves exactly the same sums of weights as the original $$$c_w$$$ items of weight $$$w$$$. Now we only have $$$\sum_{w=1}^{\sqrt{n}}\log(c_w)=O(\sqrt{n})$$$ items.
The total time complexity is $$$O(\sqrt{n}\cdot n+\sqrt{n}\cdot n)=O(n\sqrt{n})$$$.
And this can also be optimized with bitset, leading to an $$$O(\frac{n\sqrt{n}}{\omega})$$$ solution.
// nsqrtn/w
#include <bits/stdc++.h>
using namespace std;
const int N = 2e5 + 10;
template <int maxn>
int solve(int n, int k) {
if (maxn <= n) {
return solve<min(N, maxn * 2)>(n, k);
}
bitset<maxn> dp;
vector<int> fa(n, -1), dep(n, 0);
vector<int> noson(n, 1), cnt(n + 1);
cnt[0]++;
for (int i = 1; i < n; i++) {
cin >> fa[i], fa[i]--;
dep[i] = dep[fa[i]] + 1;
cnt[dep[i]]++;
noson[fa[i]] = 0;
}
dp.reset();
dp[0] = 1;
int mxdep = 1e9;
for (int i = 0; i < n; i++)
if (noson[i]) mxdep = min(mxdep, dep[i]);
int sum = 0;
vector<int> all(cnt.begin(), cnt.begin() + mxdep + 1);
sort(all.begin(), all.end());
vector<int> v;
for (int i = 0; i < (int)all.size(); i++) {
int j = i;
while (j + 1 < (int)all.size() && all[i] == all[j + 1]) j++;
int t = j - i + 1;
for (int z = 1; z <= t; z *= 2) {
v.push_back(z * all[i]);
t -= z;
}
if (t > 0) v.push_back(t * all[i]);
i = j;
}
dp[0] = 1;
for (auto val : v) {
sum += val;
dp |= (dp << val);
}
if (sum <= k || sum <= n - k) return mxdep + 1;
for (int i = 0; i <= sum; i++)
if (dp[i]) {
if (i <= k && sum - i <= n - k) return mxdep + 1;
}
return mxdep;
}
int main() {
ios::sync_with_stdio(false);
cin.tie(0);
int T;
cin >> T;
while (T--) {
int n, k;
cin >> n >> k;
cout << solve<1>(n, k) << "\n";
}
return 0;
}
2139D — Antiamuny and Slider Movement
Idea: tarjen,StarSilk Preparation: installb,tarjen,StarSilk
Let’s define $$$b_i$$$ as the position of the $$$i$$$-th slider decreased by $$$i$$$. Every operation on the sliders can be interpreted as a modification of the array $$$b$$$.
If we increase the position of slider $$$i$$$, then $$$b_i$$$ increases. At the same time, every suffix element $$$b_{i+1}, b_{i+2}, \dots, b_n$$$ gets updated to $$$b_j := \max(b_j, b_i)$$$.
If we decrease the position of slider $$$i$$$, then $$$b_i$$$ decreases. At the same time, every prefix element $$$b_1, b_2, \dots, b_{i-1}$$$ gets updated to $$$b_j := \min(b_j, b_i)$$$.
Since the array $$$b$$$ always remains non-decreasing, we can reinterpret operations as follows:
- Every operation simultaneously applies a $$$\min$$$ operation to all elements on the left, and a $$$\max$$$ operation to all elements on the right.
For a fixed slider $$$b_i$$$, the effect of each operation on the slider should be one of the following:
- $$$b_i := \max(b_i, x)$$$
- $$$b_i := \min(b_i, x)$$$
- $$$b_i := x$$$ (assignment)
Our task is to compute the sum of final values of $$$b_i$$$ after all possible sequences of operations.
We sort all operations by their parameter $$$x$$$. If multiple operations share the same $$$x$$$, we give them a consistent order and then assume all $$$x$$$ are distinct in the following part of the editorial.
We first suppose that the position of the slider is changed at least once. Suppose the final value for slider $$$i$$$ is $$$b_e$$$; let's consider which operation sets $$$b_i = b_e$$$ at the very end.
At this point, we no longer care about the exact value of $$$b_i$$$ during the process — only whether it is less than, equal to, or greater than $$$b_e$$$.
All operations of type $$$\max$$$ with $$$x \lt b_e$$$ and type $$$\min$$$ with $$$x \gt b_e$$$ are irrelevant. We can insert them anywhere after deciding the order of other operations without affecting the result. We won't consider these operations in the following parts of the editorial.
All assignment operations $$$b_i := x$$$ can be reinterpreted:
- If $$$x \lt b_e$$$, treat it as $$$b_i := \min(b_i, x)$$$.
- If $$$x \gt b_e$$$, treat it as $$$b_i := \max(b_i, x)$$$.
The last operation that makes $$$b_i = b_e$$$ must be the one with parameter $$$x = b_e$$$.
If this operation is assignment, then all other operations can appear in any order.
If this operation is $$$\max$$$ type, then immediately before it we must have $$$b_i \lt b_e$$$. There are two possibilities:
- Either the last preceding valid operation is some $$$\min$$$ type operation making $$$b_i \lt b_e$$$;
- Or no other effective operation occurs, and the initial value of $$$b_i$$$ was already smaller than $$$b_e$$$.
If this operation is $$$\min$$$ type, the situation is symmetric: either the last preceding valid operation is a $$$\max$$$ type operation, or the initial value of $$$b_i$$$ was already larger than $$$b_e$$$.
Finally, don’t forget the case where a slider is never touched: it simply keeps its initial value.
For each of the $$$n$$$ sliders, we need to check contributions over all $$$m$$$ operations. The overall time complexity is $$$O(nm)$$$.
#include<bits/stdc++.h>
using namespace std;
const long long mod=1000000007;
long long a[5000],ans[5000],tt[5001],inv[5001];
struct apos{
long long id;
long long x;
friend bool operator<(apos a,apos b){
return a.x<b.x;
}
}ap[5000];
int main(){
ios::sync_with_stdio(false),cin.tie(0);
int T,flag;
long long n,m,p,q,x,i,j,cl,cr;
tt[0]=1;
for(i=1;i<=5000;i++)tt[i]=tt[i-1]*i%mod;
inv[1]=1;
for(i=2;i<=5000;i++)inv[i]=(mod-mod/i)*inv[mod%i]%mod;
for(cin>>T;T>0;T--)
{
cin>>n>>m>>p;
for(i=0;i<n;i++)
{
cin>>a[i];
a[i]-=i;
}
for(i=0;i<p;i++)
{
cin>>ap[i].id>>ap[i].x;
ap[i].id--;
ap[i].x-=ap[i].id;
}
sort(ap,ap+p);
for(i=0;i<n;i++)
{
flag=0;
for(j=0;j<p;j++)
{
if(ap[j].id<=i&&ap[j].x>=a[i])flag=1;
if(ap[j].id>=i&&ap[j].x<a[i])flag=1;
}
if(!flag)
{
ans[i]=a[i]+i;
continue;
}
ans[i]=0;
cl=0;
cr=0;
for(j=0;j<p;j++)
{
if(ap[j].id<=i)cr++;
}
for(j=0;j<p;j++)
{
if(ap[j].id<=i)cr--;
if(ap[j].id==i)ans[i]=(ans[i]+(ap[j].x+i)*inv[cl+cr+1])%mod;
if(ap[j].id<i)
{
if(cl==0&&cr==0&&a[i]<=ap[j].x)ans[i]=(ans[i]+ap[j].x+i)%mod;
else ans[i]=(ans[i]+(ap[j].x+i)*inv[cl+cr+1]%mod*inv[cl+cr]%mod*cl)%mod;
}
if(ap[j].id>i)
{
if(cl==0&&cr==0&&a[i]>ap[j].x)ans[i]=(ans[i]+ap[j].x+i)%mod;
else ans[i]=(ans[i]+(ap[j].x+i)*inv[cl+cr+1]%mod*inv[cl+cr]%mod*cr)%mod;
}
if(ap[j].id>=i)cl++;
}
}
for(int x = 0;x < n;x ++)
{
cout<<ans[x]*tt[p]%mod<<' ';
}
cout<<'\n';
}
return 0;
}
2139E1 — Determinant Construction (Easy Version)
Idea: wjy666 Preparation: wjy666
Thanks to EvenImage for his significant contribution to this solution.
We can consider transforming the construction of a matrix into the construction of a directed graph, as the determinant and permutations are closely related to the cycle covers of a directed graph.
If we have a directed graph with $$$n$$$ vertices, a cycle cover is a set of $$$n$$$ edges such that each vertex has exactly one incoming edge and one outgoing edge in this set (self-loops are allowed). In other words, these $$$n$$$ edges form one or more cycles that together cover all vertices in the graph.
In a complete directed graph (with self-loops), every permutation of the vertices determines a cycle cover and vice versa. (The cycles of the permutation are precisely the cycles in the cover.)
We consider the definition of the determinant: $$$\det(M)=\sum_{B}(-1)^{\mathrm{inv}(B)}\prod_{i=0}^{n-1}m_{i,b_{i}}$$$, where $$$B=(b_0,\ldots,b_{n-1})$$$ ranges over all permutations and $$$\mathrm{inv}(B)$$$ is its number of inversions.
This summation goes over all permutations $$$B$$$, in other words, over all cycle covers of the directed graph determined by the adjacency matrix $$$M$$$. We can also interpret $$$m_{i,j}=0$$$ as "the edge from $$$i$$$ to $$$j$$$ does not exist at all". Thus, the product term $$$\prod_{i=0}^{n-1}m_ {i,b_{i}}$$$ is non-zero only if all $$$m_ {i,b_{i}} \neq 0$$$, which corresponds to the existence of a cycle cover in the graph.
We want to construct a directed graph such that the sum of the weights of its cycle covers equals $$$x$$$. Since the current problem only allows the use of $$$-1, 0, 1$$$, the edge weights can only be $$$1$$$ or $$$-1$$$, and the contribution of a valid cycle cover to the answer can only be $$$1$$$ or $$$-1$$$.
PS: You can refer to pages 56-58 of https://ipsc.ksp.sk/2012/real/solutions/booklet.pdf for more information.
There might be many ways to construct a legal directed graph. We will demonstrate a method that transforms the problem into constructing a DAG with exactly $$$x$$$ paths from source vertex $$$1$$$ to sink vertex $$$n$$$.
For a directed acyclic graph $$$G$$$ with source vertex $$$1$$$ and sink vertex $$$n$$$, we can construct the following adjacency matrix:
- $$$g_{i,j} = -1$$$ if there is an edge $$$i \rightarrow j$$$ in $$$G$$$. Set the weight of all original edges in $$$G$$$ to $$$-1$$$.
- $$$g_{i,i} = 1$$$ for all $$$1 \lt i \lt n$$$. Add a self-loop with weight $$$1$$$ to all vertices in $$$G$$$ except the source and sink.
- $$$g_{n,1} = 1$$$. Add an edge from the sink to the source with weight $$$1$$$.
- Everything else $$$= 0$$$.
We analyze all cycle covers of this new graph. Due to the presence of the edge $$$n \rightarrow 1$$$, any cycle cover must contain a main cycle that includes vertices $$$1$$$ and $$$n$$$, formed by a path $$$P$$$ from $$$1$$$ to $$$n$$$ in $$$G$$$ together with the edge $$$n \rightarrow 1$$$. The vertices not on path $$$P$$$ must be covered by self-loops with weight $$$1$$$. Therefore, each cycle cover uniquely corresponds to a path $$$P$$$ from $$$1$$$ to $$$n$$$.
Since there are $$$x$$$ paths in $$$G$$$, we only need to ensure that the contribution of each cycle cover is $$$1$$$ to obtain a determinant value of $$$x$$$. Next, we will analyze how to achieve this by setting the weight of all original edges in $$$G$$$ to $$$-1$$$.
For each cycle cover, first consider the product of all edge weights:
- The main cycle has $$$k$$$ edges, among which $$$k-1$$$ edges come from path $$$P$$$ (each with weight $$$-1$$$), and one edge is $$$n \rightarrow 1$$$ (with weight $$$1$$$). Thus, the product term is $$$(-1)^{k-1}$$$.
- The weights of all other self-loops are $$$1$$$.
Therefore, the product of all edge weights is $$$(-1)^{k-1}$$$.
Then we consider the sign term $$$(-1)^{\mathrm{inv}(B)}$$$. In fact, the parity of the number of inversions $$$\mathrm{inv}(B)$$$ in the permutation is consistent with the parity of the number of even cycles in the permutation. The main cycle is a cycle of length $$$k$$$, and the self-loops are all cycles of length $$$1$$$. Therefore, the sign term is $$$(-1)^{k-1}$$$.
Thus, the contribution of each cycle cover is $$$(-1)^{k-1} \cdot (-1)^{k-1}=1$$$.
Due to the constraints in the problem that the matrix size is at most $$$80$$$ and there are at most $$$3$$$ non-zero entries per row and column, we need to construct a DAG that satisfies:
- Exactly $$$x$$$ paths from source vertex $$$1$$$ to sink vertex $$$n$$$.
- At most $$$80$$$ nodes.
- The in-degree of each node $$$\le 2$$$ and the out-degree of each node $$$\le 2$$$.
Since $$$80$$$ is a loose boundary, there might be many constructions that satisfy it. We will introduce a construction method that encodes $$$x$$$ in base three.
The basic unit shown in the figure can form a base-three digit because the number of paths from $$$1$$$ to $$$3$$$ is $$$2$$$, and from $$$1$$$ to $$$4$$$ is $$$3$$$. Node $$$4$$$ can be connected to the next unit, forming a chain of units, where each unit corresponds to a base-three digit. Nodes $$$3$$$ and $$$4$$$ have one free out-degree, which can contribute to the final answer.

The example figure below shows how to construct $$$22=1+3+2 \cdot 3^2$$$ in this way. There are $$$\log_3 x$$$ basic units, each with $$$4$$$ nodes. The answer chain has at most $$$\log_3 x$$$ nodes. Therefore, the total number of nodes is $$$5\log_3 x + O(1)$$$, which can pass the problem.

#include<bits/stdc++.h>
using namespace std;
const int N=81;
int a[N][N];
vector<pair<int,int>> e;
void add(int x,int y){
e.push_back({x,y});
}
void make_matrix(int n){
for(int i=2;i<n;++i) a[i][i]=1;
a[n][1]=1;
for(auto [x,y]:e) a[x][y]=-1;
}
int main(){
int _; scanf("%d",&_); //cin>>_;
int max_bit=15;
while(_--){
memset(a,0,sizeof(a)); e.clear();
int n=1;
for(int i=0;i<max_bit;++i){
add(n,n+1); add(n+1,n+2); add(n+2,n+3); add(n+3,n+4);
add(n+1,n+3); add(n+2,n+4);
n+=4;
}
int x,bit_1=1,bit_2=4; scanf("%d",&x); ++n;
for(int i=0;i<max_bit;++i){
if (x%3==1) add(bit_1,n),add(n,n+1),++n;
if (x%3==2) add(bit_2,n),add(n,n+1),++n;
bit_1+=4; bit_2+=4; x/=3;
}
make_matrix(n);
cout << n <<'\n';
for (int r = 1; r <= n; r++) {
for (int c = 1; c <= n; c++) {
printf("%d ",a[r][c]);
}
cout << '\n';
}
}
}
We can consider using tridiagonal matrices, whose advantage is that the determinant can be computed by a recurrence.
Let $$$d_{i,j}$$$ denote the element in the $$$i$$$-th row and $$$j$$$-th column of matrix $$$D$$$, let $$$D_k$$$ be the submatrix composed of the upper-left $$$k \times k$$$ elements, and let $$$A_k$$$ denote the determinant of $$$D_k$$$.
When $$$k \ge 2$$$, there are two ways to obtain $$$D_{k+1}$$$ from $$$D_k$$$:
- Let $$$ d_{k+1,k+1}=1, d_{k+1,k}=1,d_{k,k+1}=1$$$ to get $$$A_{k+1}=A_k-A_{k-1}$$$
- Let $$$ d_{k+1,k+1}=1, d_{k+1,k}=1,d_{k,k+1}=-1$$$ to get $$$A_{k+1}=A_k+A_{k-1}$$$
Thus, we transform the problem into constructing a sequence $$$A_1,\ldots,A_m$$$ satisfying:
- $$$A_1 = |D_1|, A_2 = |D_2|$$$
- For any $$$k=2,3,\ldots,m-1$$$, one of the following two equations holds: $$$A_{k+1}=A_k-A_{k-1}$$$, or $$$A_{k+1}=A_k+A_{k-1}$$$
- $$$A_m=n (m \le 80)$$$
Consider recursively generating the entire sequence from $$$A_m=n$$$ backwards.
Choose a positive integer as $$$A_{m−1}$$$.
At a certain moment, we have $$$A_k$$$ and $$$A_{k+1}$$$, and obtain $$$A_{k−1}$$$ as follows:
If $$$A_k \gt A_{k+1}$$$, let $$$A_{k-1}=A_k-A_{k+1}$$$; otherwise, let $$$A_{k-1}=A_{k+1}-A_k$$$
This ensures that each number in the sequence is a non-negative integer.
If at some point there exist $$$D_1,D_2$$$ satisfying $$$A_k = |D_1|, A_{k+1} = |D_2|$$$, successfully finish the construction.
For example, if we have $$$A_m=7$$$ and $$$A_{m-1}=3$$$, the final sequence will be $$$1,2,3,1,4,3,7$$$, and we generate the matrix accordingly.
Denote $$$(A_k,A_{k+1})=(x,y)$$$.
When $$$x \gt y$$$, the recursion process can be written as $$$(x,y) \rightarrow (x-y,x) \rightarrow (y,x-y)$$$, increasing the length of the sequence by $$$2$$$, completing one iteration of subtraction.
When $$$x \le y$$$, the recursion process can be written as $$$(x,y) \rightarrow (y-x,x)$$$, increasing the length of the sequence by $$$1$$$, completing one iteration of subtraction.
The process will eventually lead to $$$(x,0)$$$ or $$$(0,x)$$$, where $$$x$$$ is the GCD of $$$A_m$$$ and $$$A_{m−1}$$$.
When $$$A_m$$$ and $$$A_{m-1}$$$ are coprime, we have $$$x=1$$$, meeting the requirements for the sequence. In fact, we can stop earlier at $$$(1,2)$$$, and the sequence is still valid.
When $$$A_m$$$ and $$$A_{m−1}$$$ are not coprime, we have $$$x \gt 1$$$, and this method can't obtain a sequence that meets the requirements.
Given $$$A_m=n$$$, we need to find a suitable $$$A_{m−1}$$$ such that the length of the final sequence is less than or equal to $$$80$$$.
Since $$$80$$$ is a relaxed constraint, and you only need to find one valid $$$A_{m−1}$$$, you can try various approaches.
For example, you could enumerate all possibilities within the range $$$[1, n-1]$$$ for some large $$$n$$$, and then discover that there are many valid $$$A_{m−1}$$$ values that meet the criteria.
If you randomly search for $$$A_{m−1}$$$ within the range, the expected number of attempts required is very low. In fact, the verification program tested all integers $$$n$$$ from $$$10$$$ to $$$10^7$$$, randomly selecting $$$1000$$$ possible $$$A_{m−1}$$$ values for each n and testing them. The results showed that in the worst case (when $$$n = 9827370$$$), there were still $$$24$$$ valid $$$A_{m−1}$$$ values that met the requirements out of the $$$1000$$$ randomly chosen values.
Code:
#include<bits/stdc++.h>
using namespace std;
int fl[1005],inf=1e9,cnt;
int work(int x,int y){
int ans=0,tmp;
while(y&&ans<=80){
++ans; tmp=y;
if (x>y) y=x-y; else y=y-x;
x=tmp;
}
if (x>1) return inf;
return ans;
}
void gen(int x,int y){
//printf("%d\n",work(x,y));
vector<int> seq; seq.clear(); seq.push_back(x); seq.push_back(y);
int tmp;
while(!(x==2&&y==1)){
tmp=y;
if (x>y) y=x-y; else y=y-x;//x=tmp;
x=tmp;
seq.push_back(y);
}
int ans[105][105]={0};
int pre=seq.back(),n=1; seq.pop_back(); ans[1][1]=1;
while(!seq.empty()){
++n;
if (seq.back()>pre) ans[n][n]=1,ans[n][n-1]=1,ans[n-1][n]=-1;
else ans[n][n]=1,ans[n][n-1]=1,ans[n-1][n]=1;
pre=seq.back(); seq.pop_back();
}
printf("%d\n",n);
for(int i=1;i<=n;++i){
for(int j=1;j<=n;++j){
printf("%d ",ans[i][j]);
}
printf("\n");
}
}
int main(){
int _; cin>>_; mt19937 rd(time(0));
while(_--){
int n; cin>>n;
if (n==0) {printf("1\n0\n"); continue;}
if (n==1) {printf("1\n1\n"); continue;}
while(1){
int y=rd()%(n-1)+1;
int tmp=work(n,y);
if (tmp<=80) {gen(n,y); break;}
}
}
return 0;
}
Verification Program:
#pragma GCC optimize(3)
#include<bits/stdc++.h>
using namespace std;
int inf=1e9;
int work(int x,int y){
int ans=0,tmp;
while(y&&ans<=80){
++ans; tmp=y;
if (x>y) y=x-y; else y=y-x;
x=tmp;
}
if (x>1) return inf;
return ans;
}
int main(){
mt19937 rd(114514);
int n=1e7,minn=1e3;
for(int x=n;x>=5;x--){
int _=1e3,c=0;
while(_--){
int y=rd()%(x-1)+1;
int tmp=work(x,y);
if (tmp<=80) ++c;
}
if (c<minn) minn=c,cout<<' '<<minn<<' '<<x<<endl;
if (x%10000==1) cout<<clock()<<'#'<<x<<endl;
}
return 0;
}
2139E2 — Determinant Construction (Hard Version)
Idea: wjy666 Preparation: wjy666
Since $$$50$$$ is a very tight constraint, the author did not find a method to construct the DAG for E2 as in Solution 1 of E1. If you have a better solution, please leave a comment below.
In Solution 2, the first two steps remain the same as in E1, but we need a better method to find a suitable $$$A_{m-1}$$$ such that the length of the final sequence is at most $$$50$$$. Randomly searching for $$$A_{m-1}$$$ within the range $$$[1, n-1]$$$ may result in TLE.
We notice that if $$$A_{m-1}$$$ and $$$A_m$$$ are two adjacent Fibonacci numbers, the sequence converges rapidly (with a length within $$$40$$$). Although not every $$$A_m=n$$$ is a Fibonacci number, we can still draw inspiration from this.
We attempt to decompose $$$n$$$ as $$$n=a \cdot \operatorname{fib}(i)+b \cdot \operatorname{fib}(i-1)$$$ with $$$a,b \ge 0$$$, where $$$\operatorname{fib}(i)$$$ denotes the $$$i$$$-th Fibonacci number.
Then, we set $$$A_{m-1}=a \cdot \operatorname{fib}(i-1)+b \cdot \operatorname{fib}(i-2)$$$
By iterating further, we obtain $$$A_{m-2}=a \cdot \operatorname{fib}(i-2)+b \cdot \operatorname{fib}(i-3)$$$, $$$\dots$$$
The purpose of this is to make the iteration decrease the value rapidly in the initial steps, just like the Fibonacci sequence. Intuitively, the larger the chosen $$$i$$$, the higher the probability of finding a valid $$$A_{m-1}$$$.
The author did not provide a rigorous mathematical proof for this, but a verification program found a legal $$$A_{m-1}$$$ for all $$$A_m=n \in [2,10^7]$$$ within one minute. Therefore, it is feasible to verify the correctness of this approach over the full data range during the contest.
#include<bits/stdc++.h>
using namespace std;
int fl[1005],inf=1e9,cnt;
int work(int x,int y){
int ans=0,tmp;
while(y&&ans<=50){
++ans; tmp=y;
if (x>y) y=x-y; else y=y-x;
x=tmp;
}
if (x>1) return inf;
return ans;
}
void gen(int x,int y){
//printf("%d\n",work(x,y));
vector<int> seq; seq.clear(); seq.push_back(x); seq.push_back(y);
int tmp;
while(!(x==2&&y==1)){
tmp=y;
if (x>y) y=x-y; else y=y-x;//x=tmp;
x=tmp;
seq.push_back(y);
}
int ans[55][55]={0};
int pre=seq.back(),n=1; seq.pop_back(); ans[1][1]=1;
while(!seq.empty()){
++n;
if (seq.back()>pre) ans[n][n]=1,ans[n][n-1]=1,ans[n-1][n]=-1;
else ans[n][n]=1,ans[n][n-1]=1,ans[n-1][n]=1;
pre=seq.back(); seq.pop_back();
}
printf("%d\n",n);
for(int i=1;i<=n;++i){
for(int j=1;j<=n;++j){
printf("%d ",ans[i][j]);
}
printf("\n");
}
}
typedef long long ll;
ll fib[55]={1,1};
ll exgcd(ll a,ll b,ll &x,ll &y){
if(!b){
x=1; y=0; return a;
}
ll d=exgcd(b,a%b,y,x);
y-=(a/b)*x;
return d;
}
map<int,int> mp;
int main(){
for(int i=2;i<=50;++i) fib[i]=fib[i-1]+fib[i-2];
int _; cin>>_;
while(_--){
int n; cin>>n;
if (n==0) {printf("1\n0\n"); continue;}
if (n==1) {printf("1\n1\n"); continue;}
if (n<=10){
gen(n,n-1); continue;
}
if (mp[n]) {gen(n,mp[n]); continue;}
int noww=0;
while(fib[noww]<n) ++noww;
int now=noww;
while(now>=3){
ll a=fib[now-1],b=fib[now-2],x,y,c=n;
ll d=exgcd(a,b,x,y);
x*=c,y*=c;
//find x,y
x=(x%b+b)%b; y=(c-a*x)/b; if (y<0) {--now; continue;}
int s=inf,o;
while(y>=0){
o=fib[now-2]*x+fib[now-3]*y;
s=work(n,o);
if (s>50) {x+=b,y-=a; continue;}
else break;
}
if (s>50) {--now; continue;}
else{
gen(n,o); mp[n]=o; break;
}
}
//
}
return 0;
}
We note that for the Fibonacci sequence, $$$\lim_{x \to \infty} \frac{\operatorname{fib}(x-1)}{\operatorname{fib}(x)} = \frac{\sqrt 5 -1}{2}$$$, which suggests that searching for $$$A_{m-1}$$$ around $$$\frac{\sqrt 5 -1}{2}n$$$ might be a promising approach.
In fact, directly enumerating $$$A_{m-1}$$$ starting from $$$\frac{\sqrt 5 -1}{2}n$$$ and trying each candidate is an efficient method. The author also verified with a program that for all $$$A_m=n \in [2,10^7]$$$, a legal $$$A_{m-1}$$$ was found within one minute. Thus, it is feasible to verify the correctness of this approach within the data range during the contest.
Considering that the time limit for this problem is relatively loose (5s), most solutions that search for $$$A_{m-1}$$$ around $$$\frac{\sqrt 5 -1}{2}n$$$ should be able to pass.
#include<bits/stdc++.h>
using namespace std;
int fl[1005],inf=1e9,cnt;
int work(int x,int y){
int ans=0,tmp;
while(y&&ans<=50){
++ans; tmp=y;
if (x>y) y=x-y; else y=y-x;
x=tmp;
}
if (x>1) return inf;
return ans;
}
void gen(int x,int y){
//printf("%d\n",work(x,y));
vector<int> seq; seq.clear(); seq.push_back(x); seq.push_back(y);
int tmp;
while(!(x==2&&y==1)){
tmp=y;
if (x>y) y=x-y; else y=y-x;//x=tmp;
x=tmp;
seq.push_back(y);
}
int ans[55][55]={0};
int pre=seq.back(),n=1; seq.pop_back(); ans[1][1]=1;
while(!seq.empty()){
++n;
if (seq.back()>pre) ans[n][n]=1,ans[n][n-1]=1,ans[n-1][n]=-1;
else ans[n][n]=1,ans[n][n-1]=1,ans[n-1][n]=1;
pre=seq.back(); seq.pop_back();
}
printf("%d\n",n);
for(int i=1;i<=n;++i){
for(int j=1;j<=n;++j){
printf("%d ",ans[i][j]);
}
printf("\n");
}
}
map<int,int> mp;
int main(){
int _; cin>>_;
while(_--){
int n; cin>>n;
if (n==0) {printf("1\n0\n"); continue;}
if (n==1) {printf("1\n1\n"); continue;}
if (mp[n]) {gen(n,mp[n]); continue;}
double s=(sqrt(5)-1.0)/2.0;
int st=s*n,l=max(1,st),r=n-1;
//int l=1,r=n-1;
int nowy,nows=50;
if (n%2==0){
if (l%2==0) ++l;
for(int y=l;y<=r;y+=2){
int tmp=work(n,y);
if (tmp<=nows) {gen(n,y); mp[n]=y; break;}
}
}
else{
for(int y=l;y<=r;++y){
int tmp=work(n,y);
if (tmp<=nows) {gen(n,y); mp[n]=y; break;}
}
}
}
return 0;
}
2139F — Ode to the Bridge Builder
Idea: installb Preparation: installb
For convenience, we label the points of the initial segment as $$$A(0,0)$$$ and $$$B(1,0)$$$, and the target point as $$$C(p,q)$$$.
The problem asks us to make at most $$$\left\lceil 2|AC|\right\rceil$$$ moves. This bound is somewhat unusual. We first show that the theoretical lower bound is $$$x=\left\lceil |AC|\right\rceil+\left\lceil |BC|\right\rceil-1$$$. To see that fewer moves are impossible: once the final structure is built, $$$A$$$ and $$$C$$$, as well as $$$B$$$ and $$$C$$$, must be connected, and we can find a path from $$$A$$$ to $$$C$$$ and a path from $$$B$$$ to $$$C$$$ that don't intersect (except at point $$$C$$$); otherwise the structure can't be built with non-degenerate triangles.
Each triangle we construct must alternate between advancing the two paths; otherwise we would have to draw a segment longer than $$$1$$$. The last triangle covers both paths, so the number of steps decreases by one. The optimal pair of paths is two straight segments (which is not always achievable).
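For a concrete feel of the bound (the numbers here are mine, not from the statement): if $$$C=(3,4)$$$, then $$$|AC|=5$$$ and $$$|BC|=\sqrt{(3-1)^2+4^2}=\sqrt{20}\approx 4.47$$$, so the lower bound is $$$\left\lceil 5\right\rceil+\left\lceil \sqrt{20}\right\rceil-1=5+5-1=9$$$, while the problem allows $$$\left\lceil 2\cdot 5\right\rceil=10$$$ moves.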
Case 1.1: We first try some simple strategies. Two triangles can form a parallelogram, so we can connect $$$AC$$$, choose a point $$$D$$$ on $$$AC$$$ which makes $$$|AD|=\frac{|AC|}{\lceil|AC|\rceil}$$$, which is always in range $$$[0.5,1]$$$ when $$$|AC|\ge 1$$$. And from $$$\triangle ADB$$$ we can construct a parallelogram. We draw a line passing $$$D$$$ that is parallel to $$$AB$$$, a line passing $$$B$$$ that is parallel to $$$AC$$$, and they intersect at $$$E$$$. Then we pile up the same parallelogram structure until we reach $$$C$$$. This always works when $$$30^\circ\le\angle CAB\le60^\circ$$$.
Proof: We first prove that all segments have length between $$$0.5$$$ and $$$1$$$. Because we stack the same parallelogram structure, all segments have length equal to $$$AB,BD$$$ or $$$AD$$$. $$$AB$$$ and $$$AD$$$ already meet the requirement. When $$$30^\circ\le\angle CAB\le60^\circ$$$, the length of $$$BD$$$ is at least $$$0.5$$$ as the distance between two parallel lines is at least $$$0.5$$$, and at most $$$1$$$ as $$$AB=1$$$ and $$$\angle CAB$$$ is never the largest angle in $$$\triangle DAB$$$. We can calculate that the number of moves we used is $$$2\left\lceil |AC|\right\rceil-1\le\left\lceil 2|AC|\right\rceil$$$ which matches the requirement (for the last parallelogram we only need one triangle).

Case 1.2: We can change the way we construct parallelograms (with sides $$$AB,BC$$$) and this method always works when $$$30^\circ\le\angle CBx\le60^\circ$$$. The number of moves is $$$\left\lceil2|BC|\right\rceil\le \left\lceil 2|AC|\right\rceil$$$. This case is actually necessary, which will be explained in the following part.

Case 2: If $$$\angle CAB \lt \angle CBx \lt 30^\circ$$$. We first build a point $$$D$$$ which makes $$$AD=1$$$ and $$$\angle DAB=30^\circ$$$ and construct $$$\triangle DAB$$$.
We can show that the constraint is actually easier to meet in this situation, because when $$$\angle CAB \lt \angle CBx \lt 30^\circ$$$, $$$|BC| \lt |AC|-0.5$$$. With this, $$$\left\lceil |AC|\right\rceil+\left\lceil |BC|\right\rceil−1=\left\lceil 2|AC|\right\rceil-1$$$ always holds, allowing us one extra move.
Then we draw a line passing $$$D$$$ parallel to $$$BC$$$. We can show that if $$$\angle DAB=30^\circ$$$ and $$$0\le \angle CAB\le 30^\circ$$$, the distance from $$$D$$$ to line $$$BC$$$ is at least $$$0.5$$$. So all segments with ends on different lines will have length $$$\ge 0.5$$$.
Then we select a point $$$E$$$ on $$$BC$$$, which makes $$$DE=1$$$. We can show that $$$\sqrt{3}-1\le |BE|\le 1$$$ based on some calculations in this case. Then we do a similar parallelogram construction process to $$$C$$$. The total number of moves is $$$2\left\lceil |CE|\right\rceil+2$$$.
There are still two cases here to analyze to prove the number of steps fulfills our requirement. We write $$$|BC|=a+b$$$ where $$$a$$$ is the integer part of $$$|BC|$$$, then we analyze cases about $$$b$$$.
Case 2.1: When $$$b\ge\sqrt{3}-1$$$ or $$$b=0$$$, we can show that $$$2\left\lceil |AC|\right\rceil=\left\lceil 2|AC|\right\rceil$$$, and $$$\left\lceil |AC|\right\rceil=\left\lceil |BC|\right\rceil+1$$$ if $$$|AC|\ge 2$$$. The number of steps is $$$2\left\lceil |EC|\right\rceil+2\le 2\left\lceil |BC|\right\rceil+2=2\left\lceil |AC|\right\rceil$$$.
Case 2.2: When $$$0 \lt b \lt \sqrt{3}-1$$$, we can show that $$$\left\lceil |EC|\right\rceil=\left\lceil |BC|\right\rceil-1$$$. The number of steps is $$$2+2\left\lceil |CE|\right\rceil=2\left\lceil |BC|\right\rceil\le \left\lceil |AC|\right\rceil+\left\lceil |BC|\right\rceil=\left\lceil 2|AC|\right\rceil$$$.
Be careful with small cases where $$$|BC|$$$ is too short, which may make $$$|DF|=|EG| \lt 0.5$$$.
Note that there are some cases where $$$\angle CBx \gt 30^\circ$$$ and $$$\angle CAB \lt 30^\circ$$$, where Case 2 strategy doesn't work. So case 1.2 is required.

Case 3: If $$$\angle CBx \gt \angle CAB \gt 60^\circ$$$. We first build a point $$$D$$$ such that $$$AD=1$$$ and $$$\angle DAB=60^\circ$$$ and construct $$$\triangle DAB$$$. We write $$$|AC|=a+b$$$ where $$$a$$$ is the integer part, then we analyze two cases based on $$$b$$$.
Case 3.1: When $$$b \gt 0.5$$$ or $$$b=0$$$, $$$2\left\lceil |AC|\right\rceil=\left\lceil 2|AC|\right\rceil$$$. We begin a same parallelogram construction as Case 1 where the sides of parallelogram are $$$BC,BD$$$. We can show that $$$30^\circ\le\angle CBD\le60^\circ$$$. So it is possible to pile up to $$$C$$$ from $$$BD$$$ in $$$2\left\lceil |BC|\right\rceil-1$$$ moves. And in total we need $$$2\left\lceil |BC|\right\rceil\le2\left\lceil |AC|\right\rceil=\left\lceil 2|AC|\right\rceil$$$ moves.

Case 3.2: When $$$0 \lt b\le 0.5$$$, $$$2\left\lceil |AC|\right\rceil=\left\lceil 2|AC|\right\rceil+1$$$. We instead connect $$$CD$$$ and construct a parallelogram with sides $$$DC,DB$$$. We can show that $$$120^\circ\le\angle CDB\le150^\circ$$$ so we can still do the parallelogram construction. When $$$|AC|\ge \sqrt{3}$$$, $$$\left\lceil |CD|\right\rceil=\left\lceil |AC|\right\rceil-1$$$, making the number of moves $$$2\left\lceil |CD|\right\rceil+1=2\left\lceil |AC|\right\rceil-1=\left\lceil 2|AC|\right\rceil$$$.

Actually case 1.1 is not required, because it is impossible to construct a testcase where $$$\angle CAB \lt 60^\circ, \angle CBx \gt 60^\circ,0 \lt b\le 0.5$$$ and Case 3.2 fails.
#include <bits/stdc++.h>
using namespace std;
typedef double db;
const db pi = acos(-1.0);
const int N = 1000005;
inline int sgn(db x){
if(x < 1e-13 && x > -1e-13) return 0;
if(x > 0) return 1;
return -1;
}
struct point{
db x,y;
point (db _x = 0.0,db _y = 0.0) : x(_x), y(_y) {}
};
point operator + (const point &p1,const point &p2){
return point(p1.x + p2.x,p1.y + p2.y);
}
point operator - (const point &p1,const point &p2){
return point(p1.x - p2.x,p1.y - p2.y);
}
point operator * (db x,const point &p){
return point(x * p.x,x * p.y);
}
bool operator == (point x,point y){
return sgn(x.x - y.x) == 0 && sgn(x.y - y.y) == 0;
}
inline db dot(point p1,point p2){
return p1.x * p2.x + p1.y * p2.y;
}
inline db det(point p1,point p2){
return p1.x * p2.y - p2.x * p1.y;
}
inline db len(point p){
return sqrtl(1.0 * p.x * p.x + 1.0 * p.y * p.y);
}
point project_point(point x,point ya,point yb){
point v = yb - ya;
return ya + (dot(v,x - ya) / dot(v,v)) * v;
}
db distp(point x,point ya,point yb){
return fabs(det(ya - x,yb - x)) / len(yb - ya);
}
point line_circle_intersec_point(point o,point la,point lb){
db dis = distp(o,la,lb),l;
point pj = project_point(o,la,lb);
l = sqrt(1.0 - dis * dis);
point ret = pj + (l / len(lb - la)) * (lb - la);
return ret;
}
tuple <int,int,int> ans[N];
point p[N],target;
int step;
bool check(int n){
set <pair <int,int> > st;
st.insert({1,2});
if(n > step) return 0;
db eps_len = 1e-8,eps_dis = 1e-4;
int fl = 0;
for(int i = 1;i <= n;i ++){
auto [u,v,w] = ans[i];
if(u > v) swap(u,v);
if(st.find({u,v}) == st.end()) return 0;
if(len(p[u] - p[w]) < 0.5 - eps_len || len(p[u] - p[w]) > 1.0 + eps_len) return 0;
if(len(p[v] - p[w]) < 0.5 - eps_len || len(p[v] - p[w]) > 1.0 + eps_len) return 0;
st.insert({u,w}); st.insert({v,w});
if(len(p[w] - target) <= eps_dis) fl = 1;
}
return 1;
}
int solve1a(){
point dir = target - p[1];
int split = ceil(len(dir) - 1e-9);
int c = 2;
for(int i = 1;i < split;i ++){
++ c; p[c] = p[1] + (1.0 * i / split) * dir;
ans[c - 2] = {c - 2,c - 1,c};
++ c; p[c] = p[2] + (1.0 * i / split) * dir;
ans[c - 2] = {c - 2,c - 1,c};
// construct one parallelogram
}
++ c; p[c] = target;
ans[c - 2] = {c - 2,c - 1,c};
if(check(c - 2)) return c - 2;
return 0;
}
int solve1b(){
point dir = target - p[2];
int split = ceil(len(dir) - 1e-9);
int c = 2;
for(int i = 1;i <= split;i ++){
++ c; p[c] = p[1] + (1.0 * i / split) * dir;
ans[c - 2] = {c - 2,c - 1,c};
++ c; p[c] = p[2] + (1.0 * i / split) * dir;
ans[c - 2] = {c - 2,c - 1,c};
}
if(check(c - 2)) return c - 2;
return 0;
}
int solve2(){
p[3] = point(sqrt(3.0) / 2,0.5);
p[4] = line_circle_intersec_point(p[3],p[2],target);
if(len(p[4] - p[2]) > 1)
p[4] = p[2] + (1.0 / len(target - p[2])) * (target - p[2]);
ans[1] = {1,2,3};
ans[2] = {2,3,4};
point dir = target - p[4];
int split = ceil(len(dir) - 1e-9);
int c = 4;
for(int i = 1;i <= split;i ++){
++ c; p[c] = p[3] + (1.0 * i / split) * dir;
ans[c - 2] = {c - 2,c - 1,c};
++ c; p[c] = p[4] + (1.0 * i / split) * dir;
ans[c - 2] = {c - 2,c - 1,c};
}
if(check(c - 2)) return c - 2;
return 0;
}
int solve3a(){
p[3] = point(0.5,sqrt(3.0) / 2);
ans[1] = {1,2,3};
point dir = target - p[2];
int split = ceil(len(dir) - 1e-9);
int c = 3;
for(int i = 1;i < split;i ++){
++ c; p[c] = p[2] + (1.0 * i / split) * dir;
ans[c - 2] = {c - 2,c - 1,c};
++ c; p[c] = p[3] + (1.0 * i / split) * dir;
ans[c - 2] = {c - 2,c - 1,c};
}
++ c; p[c] = target;
ans[c - 2] = {c - 2,c - 1,c};
if(check(c - 2)) return c - 2;
return 0;
}
int solve3b(){
p[3] = point(0.5,sqrt(3.0) / 2);
ans[1] = {1,2,3};
point dir = target - p[3];
int split = ceil(len(dir) - 1e-9);
int c = 3;
for(int i = 1;i <= split;i ++){
++ c; p[c] = p[2] + (1.0 * i / split) * dir;
ans[c - 2] = {c - 2,c - 1,c};
++ c; p[c] = p[3] + (1.0 * i / split) * dir;
ans[c - 2] = {c - 2,c - 1,c};
}
if(check(c - 2)) return c - 2;
return 0;
}
int solve(){
if(int res = solve1a()) return res;
if(int res = solve1b()) return res;
if(int res = solve2()) return res;
if(int res = solve3a()) return res;
if(int res = solve3b()) return res;
return 0;
}
int main(){
int TC; scanf("%d",&TC);
p[1] = point(0,0);
p[2] = point(1,0);
while(TC --){
int tx,ty;
scanf("%d %d %d",&tx,&ty,&step);
step = ceil(2.0 * sqrt(1.0 * tx * tx + 1.0 * ty * ty));
target = point(tx,ty);
int n = solve();
assert(n > 0);
printf("%d\n",n);
for(int i = 1;i <= n;i ++){
auto [u,v,w] = ans[i];
printf("%d %d %.12lf %.12lf\n",u,v,p[w].x,p[w].y);
}
}
return 0;
}
The E2 construction seems to be related to last week's Project Euler problem: Euclid's Labour
An easier way to construct the matrix from the DAG in solution 1 of E1:
Let $$$A$$$ be the adjacency matrix, where $$$A_{i,j}=1$$$ if $$$i\rightarrow j$$$ exists and 0 otherwise. Then, $$$(I-A)^{-1}=I+A+A^2+\ldots$$$, so $$$(I-A)^{-1}[1][n]$$$ counts the number of directed paths from $$$1$$$ to $$$n$$$. By Cramer's rule, since $$$\det(I-A)=1$$$ (it's triangular), we also have $$$(I-A)^{-1}[1][n]=\operatorname{adj}(I-A)[1][n]=(-1)^{1+n}\det(B)$$$ where $$$B$$$ is the $$$(n,1)$$$ minor of $$$I-A$$$, so it suffices to output $$$B$$$ and possibly flip a row.
E looks nice!
Managed to squeeze randomized solutions to E1 (337671518) and E2 (337675705), but after the contest (didn't participate).
one "ll" (long long) cost me problems B and C in the contest :)
Why was the problem statement changed in Div 2 Problem D, Antiamuny Wants to Learn Swap? Can someone please show me an example where swapping 2 apart more than once is more optimal than only swapping one apart, but swapping 2 apart once is the same number of steps as swapping only one apart.
Take the case of 3 4 1 2.
WTH, this "Monotonic Stack" works in O(N)?!.. With O(1) "extra" space . . .
For problem C1: I was on track, but I do not understand this jump.
Now the problem can be considered as a knapsack problem, where you have d items with weight c1, …, cd. We need to find out whether you can take several numbers from c that have sum equal to k. This can be solved in O(n^2).
I was thinking along the lines of having 2 currencies (k, n-k). Finding the maximum number of items I can buy either using currency 0 or currency 1.
Item costs would be c1, …, cd
Meaning: assigning all 1 or 0 to that current level of the tree → this will increase subsequence by 1.
I think this should be a well-known problem (since the editorial skipped this transition). It would be really helpful if someone provides more context or an article link.
Yep, that was exactly how I was thinking.
Div1 C1/C2 can be solved in $$$\mathcal{O}(n*\log (n))$$$ by using formal power series of log function to compute the first $$$n+1$$$ terms of:
\begin{align*} \prod_{w=1}^{n} (x^{c_w} + 1) \end{align*}
Of course we need to do this under an NTT-friendly modulus. Yes, I know this solution is “hackable” by choosing the input carefully so that the resulting numbers are divisible by the modulus, but I’m usually not interested in this (if we want, we could choose between multiple moduli randomly, use a few moduli, etc.).
Link to my submission: 337680547 (my implementation of exp/log generating functions is not the best and it has high constant).
Actually, you don’t need the modulo, you can just reset every coefficient greater than 1 to 1, since you only care whether they can be formed or not :)
submission: 337681349
line 91
Your solution is $$$\mathcal{O}(n*\log^2(n))$$$. I’m not using divide and conquer, I use generating functions (that way, I remove one $$$\log$$$).
Wow, that's new to me
I think this is exactly what you're talking about: https://arxiv.org/pdf/1807.11597
Div2 E2/Div1 C2 can be solved by binary search on answer also. For any height, I am checking by greedily assigning zeros or ones from bottom to top.
https://mirror.codeforces.com/contest/2139/submission/337622955
Looks like it was hacked, in general doing something like this probably should contradict the NP-completeness of knapsack.
In Div.1 C2 / Div.2 E2, I don't think $$$\sum_{i=1}^{\sqrt{n}} \log c_i = O(\sqrt{n})$$$ is trivial. Here is a proof.
Under $$$\sum_{i=1}^{\sqrt{n}} i\cdot c_i = n$$$, maximize $$$\sum_{i=1}^{\sqrt{n}} \log c_i = \frac{1}{\ln{2}} (\sum_{i=1}^{\sqrt{n}} \ln{c_i})$$$, with $$$c_i \in \mathbb{R}$$$.
Let $$$f(c)=\sum_{i=1}^{\sqrt{n}} \ln{c_i} - \lambda \sum_{i=1}^{\sqrt{n}} i\cdot c_i$$$. When maximizing $$$\sum_{i=1}^{\sqrt{n}} \log c_i$$$, it should hold that $$$\frac{\partial f}{\partial c_i} = \frac{1}{c_i} - \lambda i = 0$$$.
So $$$i \cdot c_i = \frac{1}{\lambda}$$$, i.e. $$$c_i \cdot i$$$ is a constant.
Under $$$\sum_{i=1}^{\sqrt{n}} i \cdot c_i = n$$$, this gives $$$i\cdot c_i = \sqrt{n}$$$, so $$$c_i = \frac{\sqrt{n}}{i}$$$.
So $$$\max_{c_i} (\sum_{i=1}^{\sqrt{n}} \ln c_i) = \sum_{i=1}^{\sqrt{n}} \ln{\frac{\sqrt{n}}{i}} = \sum_{i=1}^{\sqrt{n}} \ln{\sqrt{n}} - \ln{i} \sim \sqrt{n} \ln{\sqrt{n}} - \int_{1}^{\sqrt{n}} \ln{i} di = O(\sqrt{n})$$$
Note that your final tilde doesn't work on asymptotic differences and any arithmetic that uses them: if $$$f(n) \sim a(n)$$$ and $$$g(n) \sim a(n)$$$, it doesn't mean $$$f(n) - g(n) \sim a(n)$$$. Instead, Stirling's approximation says $$$\sum_{i=1}^k \log i = \log k! = k \log k - k \log e + O(\log k)$$$, so $$$\sum_{i=1}^{\sqrt n} \log \frac{\sqrt{n}}{i} = \sqrt{n} \log e + O(\log n)$$$.
It only works out in this case because the next terms of the sum and integral are large enough and therefore similar enough. Consider the claim $$$\frac{n^2}{2} - \sum_{i=1}^n i \sim \frac{n^2}{2} - \int_{i=1}^n i \cdot \mathrm{d} i$$$: it'd simplify to $$$-\frac{n}{2} \sim \frac{1}{2}$$$, which is false.
Formal math is full of traps.
Thank you for pointing out the mistake.
Maybe this approach is also correct now?:
$$$\sum_{i=1}^{\sqrt{n}} (\ln\sqrt{n} - \ln i) \le \ln \sqrt{n} + \int_{1}^{\sqrt{n}} (\ln\sqrt{n} - \ln i) di = \sqrt{n} \ln \sqrt{n} - \int_{1}^{\sqrt{n}} \ln i di= O(\sqrt{n})$$$
Well, the first equality isn't correct or incorrect, it's just unproved. What are your elementary steps?
Forward solution for div2C which I (elegantly) thought of last minute: after any operation Chocola's x becomes either x / 2 or x / 2 + 2^k, because Vanilla will always have 2^(k + 1) - x. So go over the bits of x (LSB to MSB): its first 1 is already the 2^k at the beginning, and every operation we divide x by 2; if x's current bit is 1 we add 2^k, and at the end we just get rid of the extra zeros on the right.
In problem E2, I have a $$$\log_{1.378}n$$$ solution if four non-zero numbers in a column are allowed.
Additionally a $$$\log_{1.319}n$$$ valid solution:(
Though I had a terrible contest, I still hope you can enjoy these problems. For me, div1E is one of my favorite ideas, and I may have spent a hundred hours thinking through various scenarios for this problem.
Do you know how hard it is for Chinese middle school students to participate in competitions? Especially when they are held on weekdays.
It would be better if there were some hints before the solutions.
Is it possible to solve Div1A or Div2C using dynamic programming with dp[k][k], where dp[i][j] denotes that we have performed i of the 1st operation and j of the 2nd operation? In every step, I try to do the 1st operation and the 2nd operation if possible! My submission: 337636880
I used greedy and binary search on C2 and surprisingly it worked. I tried to prove it but I couldn't. I don't know whether the solution is correct or the test cases are weak.
submission
probably hackable as is the case here
During yesterday's contest I submitted Div2E1 with greedy (was sure it would fail) — 337614514 but it didn't and passed all TCs (I ended up submitting another solution for E1 hence it was skipped in main testing). Then I submitted exact same solution to Div2E2 — 337616400 which gave WA on TC13 instead of a TLE.
Since both problems only differ in time constraints, the only explanation I found was weak pretests for Div2E1. But again, after the contest I submitted the same code as I did in contest, and it passed the main tests as well — 337667320. Am I missing some point here, or are the tests for Div2E1 actually weak?
337680242
A clean way to solve Div1A :)
may you explain this please?
At any time, if your current value is $$$C$$$, you either become
after applying the operation.
Visualize it:
Either this or this... you shift and then either add one at the $$$k$$$-th bit or not.
So... you can see that no bit is disappearing after you add them (you can't use any operation after a $$$1$$$ reaches the $$$0$$$-th bit; both become odd). The first one you add becomes the lowest bit, the second one becomes the second-lowest bit, etc.
If you want to build something like
starting from
So it's like you are just building it from the lowest bit.
Since the one for the lowest bit already exists ($$$2^k$$$), you start from the bit right after; if it's on, you use operation 2, otherwise use operation 1.
The key pattern comes from observing the bits of x and y = 2^(k+1) - x, starting from the LSB. If the bits of x and y are the same, no operation was needed at this level. If they differ, an operation must have occurred, and the person whose bit is 0 at this position was the one who gave half of their cakes.
You can check this: 337705903
Very good problems! I like them
I think the solution of problem 2138C1 has a mistake: "If the maximum answer d is not achievable, we can show that you can always achieve w−1." In my opinion, it should be d−1 instead of w−1.
regarding the editorial of div1C2. machine word length is denoted by $$$w$$$ (w), not $$$\omega$$$ (omega).
also, i have a different algorithm for subset sum in $$$O(n \sqrt n)$$$ time for this problem, and i think it has a simpler explanation. say, you have a bunch of elements of total weight $$$n$$$. consider any weight that appears three times: $$$x, x, x$$$. what weights can we get with these elements? we can get $$$0$$$, $$$x$$$, $$$2x$$$, and $$$3x$$$. i claim that i can get the same weights with only two elements! i only need $$$x$$$ and $$$2x$$$. if i replace $$$x, x, x$$$ with $$$x, 2x$$$ in the multiset of items, nothing changes. let's do this replacement process repeatedly. for each weight from $$$1$$$ to $$$n$$$ i will store the number of elements of this weight, then go through weights in the increasing order, and while some weight $$$x$$$ appears at least three times, i take two of its appearances and replace them with $$$2x$$$. here is the code snippet:
i claim that at the end of this process we have at most $$$O(\sqrt n)$$$ items. why? we did not change the sum of weights of all items. it is still $$$n$$$. however, now each weight appears in the list of items at most twice. it is a classical exercise that if the sum of some positive integers is equal to $$$n$$$, then there are at most $$$\sqrt{2n}$$$ distinct values among them. as in our multiset the multiplicity of each element is not larger than $$$2$$$, it implies that we have at most $$$2 \sqrt{2n}$$$ items in our final list. after that, we can solve subset sum trivially which would give time complexity $$$O(n \sqrt n)$$$ or $$$O(n \sqrt n / w)$$$ if we use bitsets.
i don't remember where i learned about this algorithm, so please share the source if you have it :)
There are many sources for that, the main one nowadays is https://mirror.codeforces.com/blog/entry/98663
thanks! i think this is exactly the one where i learned it.
Great comment!
I seem to have discovered a new solution(337665794) for Div1 E2(2138E2 - Determinant Construction (Hard Version)). First, construct a $$$50×50$$$ matrix (with row and column indices from 0 to 49) as follows:
Observing this matrix, we can see that all rows and columns with odd indices contain fewer than 3 non-zero elements. This means we can add non-zero elements to these odd-indexed rows or columns, with at most one addition per row or column. The initial determinant value of the matrix is 1. Considering adding non-zero elements in the upper-right region of the matrix, we find that if we modify the value at position $$$a_{ij}$$$ (where both $$$i$$$ and $$$j$$$ are odd, and $$$i \lt j$$$) to $$$k$$$, the determinant value increases by $$$k \times 2^{(j-i)/2 - 1}$$$, and modifications at different positions do not interfere with each other.
Thus, the original problem is transformed into representing $$$k-1$$$ as a sum of terms in the form $$$\pm 2^{k_1}, \pm 2^{k_2}, \dots, \pm 2^{k_t}$$$, where $$$k_1 \le k_2 \le \dots \le k_t$$$. The key point is that we can only find corresponding positions in the matrix for each exponent $$$k_i$$$ if adjacent exponents satisfy $$$k_i + 2 \le k_{i+1}$$$—if the exponents are too close together, it becomes difficult to satisfy the constraint of adding at most one non-zero element per row or column. I adopted the following method: check the binary representation of $$$k-1$$$ from the lowest to the highest bit. If the bit corresponding to $$$2^k$$$ is 1, then the term $$$\pm 2^k$$$ must be included. The strategy here is: if the bit corresponding to $$$2^{k+1}$$$ is also 1, we add $$$+2^k$$$; otherwise, we add $$$-2^k$$$. This way, once we add $$$\pm 2^k$$$, we won't add $$$\pm 2^{k+1}$$$ subsequently. Since the binary representation of k=1e7 has its highest three bits as "100..." in magnitude, the maximum term we need to add is only $$$\pm 2^{23}$$$, which matches requirements.
I used a binary-search + greedy approach in both E1 and E2 div2, which got accepted. But now when I am trying to submit it again, it is getting WA on test 68. Who can disprove my solution? https://mirror.codeforces.com/contest/2139/submission/337639284
I've solved E1 and E2 using binary search + greedy. Can anyone prove my solution? 337657306
I also used this approach, but it turned out to be a wrong solution
installb typo in C1's editorial : w-1 => d-1
The E1 problem was very nice, but my solution for E1, which uses a greedy approach after finding depths, passed all test cases. The greedy approach fails on the test case:
n=7,k=3 and parents followed by 1 2 2 3 4 4
This confused me a lot during the contest, as the same solution failed on E2 because of the missing test case in E1. I think such a test case should also be added to E1, since it lies within the constraints, so that no one else faces the same problem.
Why is it always possible to achieve (min_depth_of_any_leaf) − 1 in E1?
https://mirror.codeforces.com/blog/entry/146172?#comment-1307613
?
For my fellow low rated participants if the following statement from C1's editorial isn't clear on why its true :
"As all leaves have the same depth, c_d should be the maximum element in the array, so you are always able to select a label for groups 1 to d−1"
think about this statement :
"if there are k sources which have varying levels of some thing and you need s units from one source -- as long as sum of number of units in all sources is >= k*s the source with maximum number of units will have >= s units of that thing to give !
this is because the average is established to be >= s and the maximum will always be >= average ! [pigeon hole principle]"
combined with the fact that the number of nodes in each level is non-decreasing -- so all levels before the min depth of a leaf have fewer than the average number of nodes per level we're concerned about -- you should now be able to reason why it's true !
Now the problem can be considered as a knapsack problem, where you have d items with weight c1,⋯,cd. We need to find out whether you can take several numbers from c that have a sum equal to k. This can be solved in O(n^2).
Can you please explain how it got transformed to a knapsack problem?
I am new to the concept of bitsets.
Could someone explain the bitsets solution for C2? Like how to go about doing it?
Has the rating been updated for this contest (1048 Div 2)? Mine is still not updated yet.
See "Questions about problems". It was declared that this contest is unrated, bro.
Problem C is an excellent problem, but I don't see the point of C2. Is it just to test bitset optimization? The $$$O(n\sqrt{n})$$$ multiple knapsack solution is so ingenious—why not make it a separate hard version instead?
How to prove that the solution of 1A reaches the minimum?
"Now the problem can be considered as a knapsack problem, where you have d items with weight c1,⋯,cd. We need to find out whether you can take several numbers from c that have a sum equal to k. This can be solved in O(n^2)." Can someone explain how it got transformed into a knapsack problem?
less of a knapsack.. more like mark all possible sums from subsets of the array
I think D1C is one of the coolest problems I've done! Here's a short editorial:
(By the way, since the answer is obviously bounded by the minimum depth of any leaf, just assume we've already truncated the tree by discarding all nodes with depth > min_depth from here on out. This also means that instead of having to use exactly $$$k$$$ zeros, we can use anywhere between $$$\max(sz - (n - k), 0)$$$ and $$$\min(sz, k)$$$ zeros, where $$$sz$$$ is the number of nodes that still remain in the tree after truncation.)
Off the bat, note that we're allowed a ton of freedom with how we fill values into the tree---the only constraint is the total number of 0s/1s, which gives us a huge number of possible configurations to work with. My friend andrewgopher would refer to such a problem as continuous.
This should make us suspect that we can never stray too far from a best-case coloring, in which every layer of the tree is either entirely $$$0$$$ or $$$1$$$. And this suspicion turns out to be correct, as we can show that we can always color nodes such that at most one layer contains both 0s and 1s. One simple way to do this is to run a BFS from the root until we hit $$$k$$$ nodes, color those $$$0$$$, and color all remaining nodes with $$$1$$$.
Therefore, we can always color the tree in a way such that at most $$$1$$$ layer has a mix of $$$0$$$s and $$$1$$$s. This is actually really nice because if, hypothetically, there were $$$2$$$ mixed layers, the LCS length could be either $$$d - 2$$$ or $$$d - 1$$$, where $$$d$$$ is the depth of the tree. But, if there is one mixed layer, we know that the LCS length must be $$$d - 1$$$, and if there's no mixed layers the LCS is obviously $$$d$$$. This greatly reduces the complexity of the problem.
Thus, the question then becomes whether we can achieve a coloring with no mixed layers---this can be done with a simple knapsack. Here's my submission: 337692707
And that's it! The only thing I disliked about this problem isn't even an issue with the problem itself: rather, it's the fact that going from C1 to C2 is just "do you know bitset?"
Touch this guy
Wow Cool explanation !!
I got confused about why the answer can be >= d - 1.
Now I got your intuition of the BFS.
Thank you very much, man! Could you please elaborate more on continuous problems?
Two other problems I might consider "continuous" are 1658F - Juju and Binary String and 2128E2 - Submedians (Hard Version), but their solutions are quite different from this problem.
for DIV2D/DIV1A, if nxt[i] is the index of the next element that is less than a[i], can I assert that the result of [l, r] is NO only if nxt[nxt[l]] <= r ?
To me it sounds logical, and I'm currently getting WA2; can someone explain what I am missing?
I had a different way of solving Cake Assignment here
Do check it out
Can anyone please tell me what's wrong with my solution to DIV2 D, if possible? https://mirror.codeforces.com/problemset/submission/2138/340803037
I think this solution: 355700213 for problem 2138C1 - Maple and Tree Beauty (Easy Version) can be hacked. It only checks for sum k (or n - k) and ignores other valid distributions.
Yeah, E1 has more than 60 test cases and is still accepting wrong submissions.
How are the test cases of E1 so weak? My solution got accepted for E1, but got wrong answer on test 3 for E2.
Hey Vaidik bhai, over here -- remember the JEE grind Telegram group?
I remember jee grind but I don't remember u
lol ,u wont
Have we ever talked?
I guess we maybe just talked through the doubts we used to ask; you may know Tanish and the others.
Yes yes
I'm late, but I have an easy way to prove that the answer of C1/C2 is at least d - 1 (where d is the min depth of all leaves). Assume there are 2 mixed groups $$$i_1$$$ and $$$i_2$$$ that have sizes $$$c_1$$$ and $$$c_2$$$. WLOG, assume $$$c_1 \le c_2$$$; using the Pigeonhole Principle, we know that there are at least $$$\lceil \frac{c_1 + c_2}{2} \rceil$$$ $$$0$$$'s or $$$1$$$'s available for the two groups. We have $$$\lceil \frac{c_1 + c_2}{2} \rceil \ge \lceil \frac{2c_1}{2} \rceil = c_1$$$, which means we can fill $$$i_1$$$ entirely with $$$0$$$'s or $$$1$$$'s. Applying this over and over, we can reduce the number of mixed groups to one or zero.