Why can't you do path compression in a DSU with rollbacks? Why is the time $$$O(\log n)$$$ instead of $$$O(\alpha(n))$$$? I wrote a DSU with path compression and rollbacks, but it was slower than a DSU without path compression.
You can, no one will stop you. The problem is that the $$$O(n\alpha(n))$$$ bound is only a total (amortized) bound over all operations; a single call with path compression can still take $$$\Theta(\log n)$$$ time. So suppose you have a union operation which causes that, then you have to spend $$$\Theta(\log n)$$$ time. Then you need to revert it, again in $$$\Theta(\log n)$$$ time. Then you have to do it again, and so on. The complexity will be $$$O(n\log n)$$$ with or without path compression, but with path compression you do a lot of unnecessary operations, which only makes the constant factor larger.
Can you describe how to do path compression in DSU with rollback? I've heard many people say that we can't do path compression if we want to roll back.
Without rollback, your DSU keeps merging nodes and, on each query for the root, flattens the path it walks over. Even if the merges form a long chain of nodes, compression keeps flattening it, so the cost drops from $$$O(\log n)$$$ worst case to $$$O(\alpha(n))$$$ amortized.

With rollback, however, that long chain can be pulled apart all over again, so a single query may cost up to $$$O(\log n)$$$ again.

(Edit: I had written the wrong complexity before.)
That is the case if you only use path compression. But if you want $$$\alpha(n)$$$ amortised complexity without rollback, you are using both path compression and join-by-size, and then what Maksim1744 explained happens.
As for implementing general rollback, for a data structure with a guarantee on the worst-case complexity of operations, you can just store in a vector which fields changed and which values they had before the change, for every operation done.
As mango_lassi already mentioned, you can do rollbacks for any data structure by storing all values before the change. For DSU with path compression it would look like this:
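The code block from the original comment isn't shown here, so below is a minimal sketch of the idea, assuming a history of $$$(type, index, old\ value)$$$ triples and the same `parent` / `rank` fields that the next paragraph refers to:

```cpp
// DSU with path compression and rollback via a change history.
// Every write to parent[] or rank[] first records the old value,
// so rollback() can restore any earlier snapshot field by field.
#include <bits/stdc++.h>
using namespace std;

struct DSU {
    vector<int> parent, rank;               // rank holds subtree sizes here
    vector<array<int, 3>> history;          // (type, index, old value): 0 -> parent, 1 -> rank

    DSU(int n) : parent(n), rank(n, 1) {
        iota(parent.begin(), parent.end(), 0);
    }

    int find(int v) {
        if (parent[v] == v) return v;
        int root = find(parent[v]);
        if (root != parent[v]) {
            history.push_back({0, v, parent[v]});   // remember the old parent
            parent[v] = root;                       // path compression
        }
        return root;
    }

    bool unite(int u, int v) {
        u = find(u), v = find(v);
        if (u == v) return false;
        if (rank[u] > rank[v]) swap(u, v);          // attach the smaller tree
        history.push_back({0, u, parent[u]});
        history.push_back({1, v, rank[v]});
        parent[u] = v;
        rank[v] += rank[u];
        return true;
    }

    // Undo everything recorded after the moment history.size() was `until`.
    void rollback(size_t until) {
        while (history.size() > until) {
            auto [type, i, old] = history.back();
            history.pop_back();
            if (type == 0) parent[i] = old;
            else           rank[i] = old;
        }
    }
};
```

A caller remembers `checkpoint = dsu.history.size()` before a batch of operations and later calls `dsu.rollback(checkpoint)` to undo them.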
That's not optimal in terms of `history.size()`, but this is just an example. For instance, you can save only $$$(2, u, v)$$$ in unite and then restore it with `parent[u] = u` and `rank[v] -= rank[u]`.
Also, I said that you could, I didn't say that you should. Having path compression in a DSU with rollbacks doesn't change the complexity, it's still $$$O(\log n)$$$ per operation.
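For comparison, here is a minimal sketch (names are my own) of the more common rollback DSU without path compression: union by size keeps every tree of height $$$O(\log n)$$$, each unite changes at most two fields, and rollback simply restores them.

```cpp
// Rollback DSU without path compression: find() is worst-case O(log n)
// thanks to union by size, and every unite() can be undone exactly.
#include <bits/stdc++.h>
using namespace std;

struct RollbackDSU {
    vector<int> parent, sz;
    vector<pair<int, int>> history;          // (root that got attached, old size of its new parent)

    RollbackDSU(int n) : parent(n), sz(n, 1) {
        iota(parent.begin(), parent.end(), 0);
    }

    int find(int v) const {
        while (parent[v] != v) v = parent[v]; // no compression, the tree is never modified
        return v;
    }

    bool unite(int u, int v) {
        u = find(u), v = find(v);
        if (u == v) { history.push_back({-1, -1}); return false; }  // record a no-op
        if (sz[u] > sz[v]) swap(u, v);
        history.push_back({u, sz[v]});
        parent[u] = v;
        sz[v] += sz[u];
        return true;
    }

    void rollback() {                         // undo the most recent unite()
        auto [u, oldSize] = history.back();
        history.pop_back();
        if (u == -1) return;                  // that unite was a no-op
        sz[parent[u]] = oldSize;
        parent[u] = u;
    }
};
```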
As far as I remember, most amortized complexity bounds are broken by rollback operations.

Amortized analysis works roughly like this: the expensive operations can only happen rarely, because each one uses up potential that many cheap operations built up beforehand, so over any sequence the total work stays close to linear, and the cost per operation averages out to something small.

Rollback breaks that argument: you may have to redo work you have already done, or even undo and redo the same expensive step over and over. By going back you restore the potential, so the average cost per operation can grow again.
Take vector<> as an example. Each time you extend it with push_back(), emplace_back(), resize() or similar and it runs out of capacity, it discards the old buffer of $$$2^k$$$ elements and copies everything into a new buffer of size $$$2^{k+1}$$$. Over $$$n$$$ insertions the total copying work is $$$O(2^0 + 2^1 + \dots + 2^{\lfloor \log_2 n \rfloor}) = O(2^{\lfloor \log_2 n \rfloor + 1} - 1) = O(n)$$$, i.e. amortized $$$O(1)$$$ per insertion.

But once you add a rollback operation that also shrinks the buffer back down through $$$2^{k-1}, 2^{k-2}, \dots, 2^1, 2^0$$$, you can alternate extending and rolling back right at a capacity boundary, and the total cost becomes about $$$O(2^k \cdot n) = O(n^2)$$$, so the amortized bound breaks (there is a toy sketch of this below).

The DSU is the same: it is amortized $$$O(\alpha(n))$$$ on average, and you can still achieve $$$O(\alpha(n))$$$ if you don't do any rollback operations, which is faster. But the bound breaks during rollbacks, so rolling back together with path compression is not likely to be faster.
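To make that concrete, here is a toy model of such a container. It doesn't store real data, it only counts the element copies a reallocation would perform; the shrink-as-soon-as-half-empty policy is the naive one described above (std::vector itself never shrinks on pop_back).

```cpp
// Toy model: a doubling array whose rollback also halves the capacity as
// soon as the array is half empty. Alternating push/rollback right at a
// power-of-two boundary then reallocates every single time.
#include <bits/stdc++.h>
using namespace std;

struct ShrinkingArray {
    size_t n = 0, cap = 1;
    long long copies = 0;                    // how many element copies the reallocations did

    void reallocate(size_t newCap) { copies += n; cap = newCap; }  // models copying n elements

    void push_back() {
        if (n == cap) reallocate(2 * cap);   // grow exactly like the usual doubling vector
        ++n;
    }
    void rollback() {                        // undo the last push_back
        --n;
        if (n <= cap / 2 && cap > 1) reallocate(cap / 2);  // eagerly give the memory back
    }
};

int main() {
    ShrinkingArray a;
    const size_t n = 1 << 16;
    for (size_t i = 0; i < n; ++i) a.push_back();  // land exactly on a capacity boundary

    // Adversarial pattern from the comment above: every push/rollback pair
    // triggers two reallocations of Theta(n) copies each.
    for (int q = 0; q < 1000; ++q) { a.push_back(); a.rollback(); }

    cout << a.copies << " element copies\n";  // about 1000 * 2 * 2^16, instead of O(n + q)
}
```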