I have read on various posts on Codeforces (some very old, some a little more recent) that amortized-complexity algorithms lose their amortized bounds when we perform rollbacks. An example: [Comment by Retrograd here](http://mirror.codeforces.com/blog/entry/15296?#comment-310833).

By rollbacks I mean storing changes in some data structure and reversing them to return to a previous state when required.

While I would love a general answer, I am specifically asking about DSU rollbacks with path compression. Is there any particular reason why it should not work? Intuitively, going back to the old state takes just as many operations as it took to reach the new state.

Some might point out that between a union and the moment we roll it back, there could be a hundred-odd queries that cause path compression. We lose all of that compression after the rollback and have to redo it for further queries. However, I am not sure this really breaks the complexity: every individual query still seems to have the same cost. Note that this is about rollbacks, not a persistent DSU.
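To make the rollback idea concrete, here is a minimal, self-contained sketch (all names are my own, purely illustrative): before every write we save (index, old value) on a stack, and rolling back pops the stack and restores values in LIFO order.

```cpp
#include <cassert>
#include <stack>
#include <utility>
#include <vector>

// Illustrative rollback mechanism: each write records what it overwrote,
// so undoing X writes costs exactly X restores, applied in LIFO order.
std::vector<int> arr;
std::stack<std::pair<int, int>> history; // (index, previous value)

void set_value(int i, int v) {
    history.push({i, arr[i]}); // remember what we are about to overwrite
    arr[i] = v;
}

void rollback_one() {
    arr[history.top().first] = history.top().second; // restore the old value
    history.pop();
}
```

This is exactly the "2·X" intuition: X writes leave X records, and each rollback step is a single restore.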
While I know union by size vs. path compression is just <b>N log N</b> vs. <b>N·alpha(N)</b>, there is a particular problem that had me interested: [Codechef — Chef and Graph Queries](https://www.codechef.com/problems/GERALD07). I implemented an <b>N·sqrt(N)·log N</b> solution with Mo's algorithm and DSU rollbacks with union by size, which TLEs despite many small optimizations. On the other hand, the [tester's solution](https://s3.amazonaws.com/codechef_shared/download/Solutions/2014/March/Tester/GERALD07.cpp) does path compression with rollbacks. (I am aware that a Mo's + DSU solution without any rollbacks also exists, and it is quite beautiful, but the question isn't about that.)

I am attaching a rough implementation just to convey the idea:
↵
~~~~~
stack<pair<int,int>> stk; // (node, old parent): one record per parent change
stack<int> sizes;         // stk.size() snapshot taken by each uni(), for rollback()

int find(int x)
{
    if(par[x]==x) return x;
    stk.push(mp(x, par[x])); // remember the edge before compressing it
    return par[x]=find(par[x]);
}
void uni(int x, int y)
{
    x=find(x); y=find(y);
    sizes.push(stk.size()); // everything pushed above this marker belongs to this union
    if(x==y) return;        // same component: nothing to merge
    if(sz[y]>sz[x]) swap(x, y);
    stk.push(mp(y, par[y])); // y is a root, so par[y]==y marks this as the union record
    sz[x]+=sz[y];
    par[y]=x;
}
void rollback()
{
    int lstsz=sizes.top(); sizes.pop(); // pop the marker (from sizes, not stk)
    while((int)stk.size()>lstsz)
    {
        int v=stk.top().first, p=stk.top().second;
        if(p==v) sz[par[v]]-=sz[v]; // union record: undo the size change too
        par[v]=p;                   // restore the old parent link
        stk.pop();
    }
}
~~~~~
<spoiler summary="Code">
...
</spoiler>

To sum up: if we are really just reversing X operations, why should the total cost be more than 2·X? Why do amortized algorithms fail under rollbacks? Thanks.
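For reference, here is a self-contained sketch of the rollback-safe variant I used in the union-by-size attempt above: union by size <b>without</b> path compression. Since find() performs no writes, each union leaves exactly one parent change to undo, which is why the usual complexity argument goes through for it. All names here are my own; this is an illustration, not the tester's code.

```cpp
#include <cassert>
#include <stack>
#include <utility>
#include <vector>

// Rollback DSU with union by size and NO path compression.
// find() is O(log N) worst case and read-only, so rollback() only
// needs to undo the single parent link written by the last uni().
struct RollbackDSU {
    std::vector<int> par, sz;
    std::stack<int> merged; // roots merged away, in order; -1 marks a no-op union

    RollbackDSU(int n) : par(n), sz(n, 1) {
        for (int i = 0; i < n; i++) par[i] = i;
    }
    int find(int x) {            // no writes, hence nothing to record
        while (par[x] != x) x = par[x];
        return x;
    }
    bool uni(int x, int y) {
        x = find(x); y = find(y);
        if (x == y) { merged.push(-1); return false; }
        if (sz[x] < sz[y]) std::swap(x, y);
        par[y] = x;              // the one and only write to undo later
        sz[x] += sz[y];
        merged.push(y);
        return true;
    }
    void rollback() {            // undo the most recent uni()
        int y = merged.top(); merged.pop();
        if (y == -1) return;     // that union merged nothing
        sz[par[y]] -= sz[y];
        par[y] = y;
    }
};
```

With Mo's algorithm this gives the <b>N·sqrt(N)·log N</b> bound mentioned above: each step is one uni() or one rollback(), both O(log N).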