Share your predictions for WF #46 and WF #47 <3
# | User | Rating |
---|---|---|
1 | ecnerwala | 3648 |
2 | Benq | 3580 |
3 | orzdevinwang | 3570 |
4 | cnnfls_csy | 3569 |
5 | Geothermal | 3568 |
6 | tourist | 3565 |
7 | maroonrk | 3530 |
8 | Radewoosh | 3520 |
9 | Um_nik | 3481 |
10 | jiangly | 3467 |
# | User | Contrib. |
---|---|---|
1 | maomao90 | 174 |
2 | adamant | 164 |
2 | awoo | 164 |
4 | TheScrasse | 160 |
5 | nor | 159 |
6 | maroonrk | 156 |
7 | -is-this-fft- | 150 |
8 | SecondThread | 147 |
9 | orz | 146 |
10 | pajenegod | 145 |
Tomorrow (in 12 hours) is the main day of my first ICPC WF! The preparation is over. We are not worried, but, unfortunately, after all the rounds of Ptz, UCup, and previous years' ICPC WFs, it is clear to us that we will not win a medal without some luck. Please cheer for us and wish us luck! For two of our team, including me, this is our last attempt at the finals.
You should be able to find us tomorrow somewhere on https://www.youtube.com/@ICPCLive/, we will be the St. Petersburg State University team at the 47th WF.
In a 4-hour contest (Google Code Jam 2018 Final), he had 0 points for 3 hours and 45 minutes and was at the bottom of the scoreboard.
But in the last 15 minutes, something happened that moved him to second place at once. How?
He had been targeting the most difficult problem, and his 15th submission, I repeat, his 15th, was finally accepted, and the Legend ended up right next to tourist!
Risk is one such thing: if missed, it is a big loss, but if taken, unimaginable results are at hand. But that risk must be calculated; time and energy must be invested in it; there must be a promise not to give up, and a mindset ready to accept the results...
Contests can have last-minute twists, as if simulating real life.
Errichto is addressed as "Sir" by many in the coding community. He truly deserves the title "Sir".
https://www.youtube.com/watch?v=tNmgmwEtoWE
Watch here how a senior software engineer debunks Devin. They simply faked their demo. However, the Devin team may have achieved their goal: they got $20 million in funding from Peter Thiel. The founders are now set for life. Congratulations to them.
I hate how normalized faking demos has become. Just a few months ago, Google faked their Gemini demo. It reminds me of the dot-com bubble, where companies put "dot-com" in their names to get higher valuations and funding, ultimately duping many investors. Only a few companies survived the dot-com bubble; the majority failed. I think the same is happening now: companies are putting "AI" in their names to create hype and raise funding. Some of them may become successful, but a great majority are going to fail and disappoint a lot of people. As the saying goes, those who don't remember the past are condemned to repeat it. I don't want to sound like a pessimist, but it is necessary to be skeptical about whatever we see on the internet (especially when it comes to demos).
We invite you to participate in CodeChef's Starters 130, this Wednesday, 17th April, rated till 6-Stars (i.e., for users with rating < 2500).
Time: 8:00 PM — 10:00 PM IST
Joining us on the problem setting panel are:
Tester: Aryaman WAtoAC2001 Das.
Text Editorialist: Nishank IceKnight1093 Suresh.
Statement Verifier: Nishank IceKnight1093 Suresh.
Contest Admin: Mridul MridulAhi Ahi.
Special Thanks to MadRat_0, sv1shan and Atekichan for helping in problem preparation and their valuable feedback.
CodeFest'24 is the technical fest of the Computer Science and Engineering Department of IIT BHU, and this contest is conducted under its Competitive Programming event Manthan.
Note: Prizes are only for Indian participants (for division 1 only).
Winner: $$$25,000$$$ INR
$$$1^{st}$$$ runner up: $$$20,000$$$ INR
$$$2^{nd}$$$ runner up: $$$10,000$$$ INR
Rank $$$4^{th}$$$ to $$$8^{th}$$$: $$$1,000$$$ INR
Please fill out this form to avail them. You will receive the prizes within 7 working days for transfers within India.
Details of other events organized under CodeFest'24 can be found here.
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest. The video editorials of the problems will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here. Hope to see you participating.
Good Luck!
UPD: Editorial is out.
The World Finals are right ahead, and if everything works as I expect, you will probably have to wait a lot of time before the contest, dress rehearsals, or anything else. I prepared this blog post so that you have something to keep yourself occupied.
Our goal is to maintain connectivity in fully-dynamic query streams. In other words, we want to solve this problem:
The offline version, where you can cheat (that is, read all queries before answering, and answer everything at the very end of the program), has a very cool solution commonly known as Offline Dynamic Connectivity. This is enough in most CP problems, where the offline approach is rarely considered cheating.
Unfortunately, people in academia are not chilling and consider this a kind of cheating (not exactly, but whatever), so we need to respond to each query right after it is given. This is known as Online Dynamic Connectivity or the HDLT algorithm. I think it is as well-known as the offline algorithm in the sense that people know it exists, but many people just think it's way too hard.
The dynamic connectivity problem can be solved if the underlying graph is guaranteed to be a forest. Indeed, in this case, you can do a little bit more than just maintaining the connectivity, you can (and you have to) do the following:
link(u, v): Insert an edge $$$e = (u, v)$$$ into the forest, given that the graph remains a forest.
cut(u, v): Delete an edge $$$e = (u, v)$$$ from the forest.
find_root(u): Returns the root node of the component of $$$u$$$ — basically some arbitrary node that is consistently returned within a component (just like the find operation in DSU).
sum(u): Returns the sum of labels in the component of $$$u$$$ (yes, each vertex is associated with a label).
find_first(u): Returns the first node in the component of $$$u$$$ that satisfies some predicate (per labels).
update(v, x): Updates the label of $$$v$$$ to $$$x$$$.
This is pretty easy if you have a good top tree implementation by ecnerwala and know how to use it. But if you don't, then you have to use a sad BBST that maintains this weird cyclic Euler tour order, which is oh so confusing. But I'm not writing a top tree tutorial here, so let's not delve into these technicalities.
We do want to leverage the fact that a spanning forest can be maintained — so let's try to maintain a spanning forest and the set of back edges that currently do not belong to that spanning forest (but can be later if some edges are deleted)
Insertion is as easy as DSU — check if find_root(u) != find_root(v). If so, add the edge into the spanning forest with the link operation. Otherwise, throw it into the list of back edges. Easy!
Or is it? Suppose the deleted edge is one of the back edges. Then you can just remove it from the set and call it a day.
Suppose not. Then use the cut operation to remove the edge from the forest. After the removal, the component splits into two parts. Some back edge may connect these two parts, and in that case, you have to insert it into the spanning forest again.
So you have to find a back edge that connects these two parts. This looks easy: can't you just use some segment trees over Euler tours? Well, the fact that the trees are dynamic makes this very hard, and nobody really has an idea of how to tackle the problem this way.
And now we have found the devil: we have to find the back edges between these two parts quickly.
Given that it seems hard to directly find an edge between the two components, another approach is to scan each edge at most once — then, although a single query may take a lot of time, the total time spent will be proportional to the number of (inserted) edges, which looks good.
Let's try this. For each vertex, label it with the number of back edges incident to it. After removing the edge $$$(u, v)$$$, let $$$T_u, T_v$$$ be the components containing each vertex. We scan each of $$$T_u$$$'s back edges (using find_first). If one of them connects $$$T_u$$$ and $$$T_v$$$, we are done!
Otherwise, we need to avoid scanning them again. What should we do? Well, just throw them away! And get a wrong answer. But we want to throw them away so badly; how should we do it?
The kicker here is that we can assume $$$|T_u| \le |T_v|$$$. The choice of $$$T_u$$$ was pretty arbitrary anyway, and we can know the size of each subtree by storing it as one of the labels in the top tree. That means $$$|T_u| \le \frac{1}{2} (|T_u| + |T_v|)$$$, so we want to kick those edges out into this small area and hope that we won't be kicking out any edge too many times. For instance, if this area is really small, like $$$|T_u| = 1$$$, the back edge won't even exist. So we hope to scan each edge at most $$$\log n$$$ times.
Here's our strategy. We want to push down each back edge if it ends up not reconnecting the tree. Let's do it for only two layers first. Suppose we have some back edges that we don't want to scan again. What should we do? If we only push down back edges, the lower layer is just a heap of random edges. So we need to push down the tree as well: let's push all tree edges of $$$T_u$$$ down too.
Then we have an algorithm. Let's say the upper layer is the main layer we are working in, and the lower layer is the smaller layer where we throw stuff away. Here are the two invariants we are maintaining:
In the insertion case, just add the edge in the upper layer. Easy.
In the deletion case, let's analyze the cases:
So far so good. If nothing in the lower layer fails, we already have an algorithm.
Let's generalize the above algorithm so that we can repeat the same argument even if an edge in the lower layer fails. We now have up to $$$\log n$$$ layers, with the invariant that the edge set of the forest in layer $$$i+1$$$ is a subset of that in layer $$$i$$$.
Insertion is still easy; add an edge in layer $$$0$$$.
In the deletion case, let's analyze the cases:
Now, every time we touch an edge, we increase its level. A level can't exceed $$$\log n$$$, and each top tree operation takes $$$O(\log n)$$$ time, so we obtain an $$$O(\log^2 n)$$$ amortized time algorithm. Not easy, but very simple!
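Putting the pieces together, the deletion procedure can be sketched in the same pseudocode style as the constructions elsewhere in this post (this is my summary of the standard HDLT deletion; the exact bookkeeping, e.g. how incident back edges are located via find_first on labels, is elided):

```
procedure delete(u, v):
    if (u, v) is a back edge of some level i:
        remove it from the level-i back-edge set
        return
    let i = the level of tree edge (u, v)
    cut (u, v) from the forests of levels 0..i
    for j in [i, i-1, ..., 0]:
        let T_u, T_v = the two split parts in the level-j forest, with |T_u| <= |T_v|
        push every level-j tree edge of T_u down to level j+1
        while T_u has an incident back edge e = (x, y) of level j:
            if y lies in T_v:
                insert e as a tree edge into the forests of levels 0..j
                return            // reconnected, done
            else:
                push e down to level j+1   // its level only ever increases
    // no replacement edge exists: the component really splits in two
```

Since every scanned edge either reconnects the forest (ending the query) or has its level permanently increased, the total scanning work is bounded by the number of inserted edges times $$$\log n$$$.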
I have previously written about flows in these blogs: 1, 2, 3.
Minimum cuts are a very fascinating topic for me. There are a lot of maximum flow problems where a fairly simple chain of reductions leads to "you should use flow", but there are also a lot of flow problems where flow comes apparently out of nowhere. Particularly fascinating are those where it's more convenient to think in terms of minimum cut: they work because minimum cut allows you to model partitioning a set in two, with some additional implication-like constraints. Here are a couple of problems of that nature:
There are also flow problems on graphs with special shapes where thinking in terms of cuts makes the flow easy to efficiently calculate.
Anyway, let's move on to the actual topic of the blog. I recently had a couple discussions on various Discord servers about some problems on minimum cuts. The problems were:
Both problems involve statements like "among all minimum cuts". I thought the problems were very interesting, and shared a common thread, so I decided to write them up in a blog. They also lead neatly into some discussion of more general and abstract theory.
Good evening everyone!
TL;DR: I am in Luxor preparing for the ICPC WF and taking some photos. Also, I was offered to give an interview to Shayan (and I will), so please send your questions.
Thank you for taking part in our contest. We know it leaves a lot to be desired, but we believe that you can find something interesting among our problems.
Idea: Otomachi_Una
Obviously, a person at place $$$p$$$ will be kicked out if and only if $$$p \ge a_1$$$.
Therefore, the answer is $$$\min(n,a_1-1)$$$.
#include<bits/stdc++.h>
using namespace std;
#define ll long long
#define MP make_pair
mt19937 rnd(time(0));
int a[105];
void solve(){
    int q, k, n;
    cin >> k >> q;
    for(int i = 1; i <= k; i++) cin >> a[i];
    for(int i = 1; i <= q; i++){
        cin >> n;
        cout << min(a[1] - 1, n) << ' ';
    }
    cout << endl;
}
int main(){
    ios::sync_with_stdio(false);
    int _; cin >> _;
    while(_--) solve();
}
Idea: Otomachi_Una
The number of pairs on each side should be the same.
For each color (in the following text, "this point" refers to the point someone got by playing a card with this color):
If you have both cards of this color in your hand, you will be able to get this point.
If Nene has both cards, you will not be able to get this point.
If you have only one card, you cannot get this point when Nene is using the following strategy:
When you play one of your paired cards, Nene also plays one of her paired cards; Otherwise, Nene will have the card with the same color. She can play it and get this point.
Therefore, the answer will be the number of pairs in your hand.
#include<bits/stdc++.h>
using namespace std;
const int MAXN = 4e5 + 5;
int cnt[MAXN];
void solve() {
    int n, ans = 0;
    scanf("%d", &n);
    fill(cnt + 1, cnt + n + 1, 0);
    for(int i = 1, a; i <= n; ++i) scanf("%d", &a), ans += (++cnt[a] == 2);
    printf("%d\n", ans);
}
signed main() {
    int T;
    scanf("%d", &T);
    while(T--) solve();
    return 0;
}
Idea: Otomachi_Una
When $$$n=3$$$, the optimal matrix is the following:
1 2 3
2 2 3
3 3 3
The optimal matrix would be:
1 2 3 ... n
2 2 3 ... n
3 3 3 ... n
... ... ... ...
n n n ... n
Construction method:
for i in [n, n-1, ..., 1]:
set the i-th column to [1, 2, ..., n];
set the i-th row to [1, 2, ..., n];
This takes exactly $$$2n$$$ operations.
For the final matrix, we define $$$f(x)$$$ as the number of elements greater than or equal to $$$x$$$. The sum of all elements in the matrix is $$$\sum_{i=1}^n f(i)$$$, because an element with value $$$x$$$ is counted exactly $$$x$$$ times in this formula.
Now, we prove that $$$f(x) \le n^2-(x-1)^2$$$:
Let's rewrite the problem to make it a little simpler:
You have an $$$n\times n$$$ matrix. In each operation, you can paint exactly $$$x-1$$$ cells white and $$$n-(x-1)$$$ cells black in a row or a column. Prove that there will be at most $$$n^2-(x-1)^2$$$ black cells.
Try to strengthen the conclusion by stating that:
For any matrix of size $$$n\times m$$$, each operation can paint a row into $$$x$$$ white cells and $$$m-x$$$ black cells, or a column into $$$y$$$ white cells and $$$n-y$$$ black cells. No matter how we paint, the final matrix will have at most $$$nm-xy$$$ black cells.
We will prove this by induction.
If $$$x=m$$$ or $$$y=n$$$, the conclusion holds.
Otherwise, if the last operation is to paint a row, then this row has exactly $$$m-x$$$ black cells. And, by induction, other rows will contain at most $$$(n-1)m-x(y-1)$$$ black cells. Painting a column in the last step is similar.
Then, we have proven the conclusion above.
Since the construction above maximizes each $$$f(x)$$$, it is the optimal answer.
#include<bits/stdc++.h>
using namespace std;
#define ll long long
#define MP make_pair
mt19937 rnd(time(0));
void solve(){
    int n; cin >> n;
    int ans = 0;
    for(int i = 1; i <= n; i++) ans += (2 * i - 1) * i;
    cout << ans << ' ' << 2 * n << endl;
    for(int i = n; i >= 1; i--){
        cout << "1 " << i << ' ';
        for(int j = 1; j <= n; j++) cout << j << ' ';
        cout << endl;
        cout << "2 " << i << ' ';
        for(int j = 1; j <= n; j++) cout << j << ' ';
        cout << endl;
    }
}
int main(){
    ios::sync_with_stdio(false);
    int _; cin >> _;
    while(_--) solve();
}
Idea: Otomachi_Una
What is the answer when $$$a_i = 0$$$?
When $$$a_i = 0$$$ for all $$$i$$$, the sum can reach $$$n^2$$$ by making every $$$a_i = n$$$ in the end. Construction:
//! a function which sets a_1 ... a_k into k.
function solve(k):
if k is 1:
operate [1,1]
return
end if
solve(k-1);
for i in [k-2, ..., 1]:
operate [1,i] //! this sets a_1 ... a_i into 0
solve(i)
//! here, a should be [i, i, ..., i, i+1, i+2, ..., k-1, 0]
end for
//! here, a should be [1, 2, 3, ..., k-1, 0]
operate [1,k]
return
Here, solve(k) will take about $$$2^k$$$ operations.
Since doing operation $$$[l,r]$$$ will make $$$a_l,\cdots,a_r \le r-l+1$$$, if for all $$$l\le i\le r$$$, $$$a_i$$$ is included in at least one of the operations and $$$a_{l-1},a_{r+1}$$$ are not, the optimal strategy will be setting $$$a_i = r-l+1$$$ for $$$i\in[l,r]$$$ using the construction above.
Finally, we can use DFS or DP to determine whether each element is included in operations.
The number of operations used will not exceed $$$2^n$$$.
#include<bits/stdc++.h>
using namespace std;
int n, a[20], cnt[20];
vector<array<int, 2>> I;
void oper(int l, int r) {
    fill(cnt, cnt + n + 1, 0);
    for(int i = l; i <= r; ++i) if(a[i] <= n) ++cnt[a[i]];
    int mex = 0;
    while(cnt[mex]) ++mex;
    for(int i = l; i <= r; ++i) a[i] = mex;
    I.push_back({l, r});
}
void build(int l, int r) {
    if(l == r) {
        if(a[l]) oper(l, r);
        return;
    }
    build(l, r - 1);
    if(a[r] != r - l) oper(l, r), build(l, r - 1);
}
void solve() {
    scanf("%d", &n), I.clear(), memset(a, 0, sizeof(a));
    for(int i = 0; i < n; ++i) scanf("%d", &a[i]);
    int cur = 0, ans = 0;
    for(int s = 0; s < (1 << n); ++s) {
        int tmp = 0;
        for(int l = 0; l < n; ++l) {
            if(s & (1 << l)) {
                int r = l;
                while(r + 1 < n && (s & (1 << (r + 1)))) ++r;
                tmp += (r - l + 1) * (r - l + 1);
                l = r;
            } else tmp += a[l];
        }
        if(tmp > ans) ans = tmp, cur = s;
    }
    for(int l = 0; l < n; ++l) if(cur & (1 << l)) {
        int r = l;
        while(r + 1 < n && (cur & (1 << (r + 1)))) ++r;
        build(l, r), oper(l, r), l = r;
    }
    printf("%d %d\n", ans, (int)I.size());
    for(auto i : I) printf("%d %d\n", i[0] + 1, i[1] + 1);
}
signed main() {
    solve();
    return 0;
}
Idea: Otomachi_Una
If three consecutive monsters have energy levels $$$0,x,y\ (x,y>0)$$$, the monster with energy level $$$y$$$ will eventually "die" (have energy level $$$0$$$).
If four consecutive monsters have energy levels $$$0,x,y,z\ (x,y,z>0)$$$, what will happen to the monster $$$z$$$?
If four consecutive monsters have energy levels $$$x,y,z,w\ (x,y,z,w>0)$$$, how many rounds of spells must be used to make at least one of these monsters die?
If four consecutive monsters have energy levels $$$x,y,z,w\ (x,y,z,w>0)$$$ and none of them died after $$$t$$$ rounds of spells, then $$$y$$$ will receive at least $$$t$$$ points of damage, $$$z$$$ will receive at least $$$(t-1)+(t-2)+\cdots=O(t^2)$$$ points of damage, and $$$w$$$ will receive at least $$$O(t^3)$$$ points of damage.
That is to say, letting $$$V=\max_{i=1}^n a_i$$$, after $$$O(\sqrt[3]{V})$$$ rounds, at least one of $$$x,y,z,w$$$ will die.
So, we can simulate the process by brute force until there are no four consecutive alive monsters, and then the problem is reduced to the one described in Hint 2.
If four consecutive monsters have energy levels $$$0,x,y,z\ (x,y,z>0)$$$, then $$$x$$$ will remain alive, and $$$y$$$ will eventually die, dealing $$$D=(y-x)+(y-2x)+\cdots+(y\bmod x)$$$ damage to $$$z$$$ before that. Therefore, $$$z$$$ will remain alive if and only if $$$z>D$$$.
The time complexity is $$$O(n\sqrt[3]{V})$$$.
Bonus: actually, it can be shown that after $$$O(\sqrt[k]{V})$$$ rounds, there will be no $$$k$$$ consecutive alive monsters. Taking $$$k$$$ larger than $$$3$$$ can further reduce the time complexity, but it is harder to implement and improves actual performance little.
#include<bits/stdc++.h>
using namespace std;
#define ll long long
#define MP make_pair
mt19937 rnd(time(0));
const int MAXN = 2e5 + 5;
int n, a[MAXN];
bool b[MAXN];
bool check(){
    a[n + 1] = a[1], a[n + 2] = a[2], a[n + 3] = a[3];
    for(int i = 1; i <= n; i++)
        if(a[i] && a[i + 1] && a[i + 2] && a[i + 3]) return true;
    return false;
}
void solve(){
    cin >> n;
    for(int i = 1; i <= n; i++) cin >> a[i];
    if(n == 2){
        while(a[1] && a[2]){
            a[2] = max(0, a[2] - a[1]);
            a[1] = max(0, a[1] - a[2]);
        }
        b[1] = (a[1] > 0); b[2] = (a[2] > 0);
    }else if(n == 3){
        while(a[1] && a[2] && a[3]){
            a[2] = max(0, a[2] - a[1]);
            a[3] = max(0, a[3] - a[2]);
            a[1] = max(0, a[1] - a[3]);
        }
        b[1] = (!a[3] && a[1]);
        b[2] = (!a[1] && a[2]);
        b[3] = (!a[2] && a[3]);
    }else{
        while(check()){
            for(int i = 1; i <= n; i++) a[i % n + 1] = max(0, a[i % n + 1] - a[i]);
        }
        for(int i = 1; i <= n; i++) b[i] = false;
        auto attack = [&](ll x, ll y){
            ll k = x / y;
            return (2 * x - (k + 1) * y) * k / 2;
        };
        for(int p = 1; p <= n; p++)
            if(a[p] && a[p % n + 1]) a[p % n + 1] = max(0, a[p % n + 1] - a[p]);
            else break;
        for(int i = 1; i <= n; i++) if(!a[i] && a[i % n + 1]){
            b[i % n + 1] = true;
            b[(i + 2) % n + 1] = (a[(i + 2) % n + 1] > attack(a[(i + 1) % n + 1], a[i % n + 1]));
        }
    }
    int cnt = 0;
    for(int i = 1; i <= n; i++) if(b[i]) cnt++;
    cout << cnt << endl;
    for(int i = 1; i <= n; i++) if(b[i]) cout << i << ' ';
    cout << endl;
}
int main(){
    ios::sync_with_stdio(false);
    int _; cin >> _;
    while(_--) solve();
}
Idea: Otomachi_Una
Two people $$$i$$$ and $$$j\ (i<j)$$$ can pass the ball to each other directly if and only if $$$[i+l_i,i+r_i]\cap[j-r_j,j-l_j]\neq \varnothing$$$.
According to the hint above, we can build the following graph:
That is, assuming vertices $$$1$$$ to $$$n$$$ are players, vertices $$$n+1$$$ to $$$2n$$$ are temporary spots, and player $$$i$$$ links to all the spots where his/her arm can reach.
Then, the answer will be the number of connected components in this graph that contain at least one vertex with index less than or equal to $$$n$$$.
But there's still a little problem with this solution. Two players $$$i, j\ (i<j)$$$ satisfying $$$[i+l_i,i+r_i]\cap[j+l_j,j+r_j]\neq \varnothing$$$ (that is, both players reaching out their right arms) are incorrectly counted as connected.
To solve that, we can delete all the vertices $$$n+x$$$ such that $$$\forall i,\ x\not\in[i-r_i,i-l_i]$$$ or $$$\forall i,\ x\not\in[i+l_i,i+r_i]$$$ (that is, nobody's left/right arm can reach $$$x$$$). Finding such $$$x$$$ can be done easily in $$$O(n)$$$.
The last issue is that the graph contains $$$O(n^2)$$$ edges. But since we only care about connectivity, the operation "link $$$x$$$ to $$$[y,z]$$$" can be changed to "link $$$x$$$ to $$$y$$$, and link $$$i$$$ to $$$i+1$$$ for all $$$i$$$ in $$$[y,z-1]$$$". After that, and after removing multiple edges, the number of edges is reduced to $$$O(n)$$$.
Finally, counting connected components in a graph can be easily done in $$$O(n)$$$, so the time complexity is $$$O(n)$$$.
#include<bits/stdc++.h>
using namespace std;
#define ll long long
#define MP make_pair
const int MAXN = 2e6 + 5;
int n, tot, le[MAXN], ri[MAXN];
int fa[MAXN << 1], s[MAXN], t[MAXN], pre[MAXN], suf[MAXN];
inline int find(int x){
    while(x ^ fa[x]) x = fa[x] = fa[fa[x]];
    return x;
}
void solve(){
    cin >> n;
    for(int i = 1; i <= 2 * n + 1; i++) fa[i] = i;
    for(int i = 0; i <= n + 1; i++) s[i] = t[i] = pre[i] = suf[i] = 0;
    for(int i = 1; i <= n; i++){
        cin >> le[i] >> ri[i];
        s[max(1, i - ri[i])]++; s[max(0, i - le[i]) + 1]--;
        t[min(n + 1, i + le[i])]++; t[min(n, i + ri[i]) + 1]--;
    }
    tot = n;
    for(int i = 1; i <= n; i++){
        s[i] += s[i - 1], t[i] += t[i - 1];
        if(s[i] && t[i]) suf[i] = pre[i] = ++tot;
    }
    suf[n + 1] = 0;
    for(int i = 1; i <= n; i++) pre[i] = (pre[i] ? pre[i] : pre[i - 1]);
    for(int i = n; i >= 1; i--) suf[i] = (suf[i] ? suf[i] : suf[i + 1]);
    for(int i = 1; i <= n; i++){
        int l = max(1, i - ri[i]), r = max(0, i - le[i]);
        if(l <= r){
            l = suf[l], r = pre[r];
            if(l && r && l <= r) for(int j = find(l); j < r; j = find(j)) fa[j] = j + 1;
        }
        l = min(n + 1, i + le[i]), r = min(n, i + ri[i]);
        if(l <= r){
            l = suf[l], r = pre[r];
            if(l && r && l <= r) for(int j = find(l); j < r; j = find(j)) fa[j] = j + 1;
        }
    }
    for(int i = 1; i <= n; i++){
        int l = max(1, i - ri[i]), r = max(0, i - le[i]);
        if(l <= r){
            l = suf[l], r = pre[r];
            if(l && r && l <= r) fa[find(i)] = find(l);
        }
        l = min(n + 1, i + le[i]), r = min(n, i + ri[i]);
        if(l <= r){
            l = suf[l], r = pre[r];
            if(l && r && l <= r) fa[find(i)] = find(l);
        }
    }
    int ans = 0;
    for(int i = 1; i <= tot; i++) if(fa[i] == i) ans++;
    cout << ans << '\n';
}
int main(){
    ios::sync_with_stdio(false);
    // freopen("Otomachi_Una.in","r",stdin);
    // freopen("Otomachi_Una.out","w",stdout);
    int _; cin >> _;
    while(_--) solve();
    return 0;
}