Why can you not do path compression in DSU with rollbacks? Why is the time $$$O(\log n)$$$ instead of $$$O(\alpha(n))$$$? I wrote a DSU with path compression and rollbacks, but it was slower than a DSU without path compression.
You can, no one will stop you. The problem is that the $$$O(\alpha(n))$$$ bound is amortized: the total over $$$n$$$ operations is $$$O(n\alpha(n))$$$, but a single call with path compression can still take $$$\Theta(\log n)$$$ time. So suppose you have a union operation which causes that; you have to spend $$$\Theta(\log n)$$$ time. Then you need to revert it, again in $$$\Theta(\log n)$$$ time. Then you have to do it again, and so on. The complexity will be $$$O(n\log n)$$$ with or without path compression, but with path compression you do a lot of unnecessary operations, causing a larger constant factor.
Can you describe how to do path compression in DSU with rollback? I've heard many people say that we can't do path compression if we want to roll back.
Without rollbacks, your DSU keeps merging nodes, and once a long chain has been compressed, later queries find the ancestor cheaply. That is why the per-query cost drops from $$$O(\log n)$$$ to $$$O(\alpha(n))$$$ amortized.
With rollbacks, however, that long chain can be pulled apart all over again, so a single query may cost up to $$$\Theta(\log n)$$$ each time.
edited the wrong complexity
That is, if you only use path compression. But if you want $$$O(\alpha(n))$$$ amortised complexity without rollback, you are using both path compression and union by size. Then what Maksim1744 explained happens.
As for implementing general rollback: for a data structure with a worst-case guarantee on the complexity of its operations, you can simply store in a vector, for every operation, which fields changed and which values they had before the change.
As mango_lassi already mentioned, you can do rollbacks for any data structure by storing all values before the change. For DSU with path compression it would look like this:
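A minimal sketch of what such a history-based DSU could look like, assuming a `history` vector of `(field, index, old value)` records; the helper names `set_parent`, `set_rank`, `snapshot` and `rollback` are illustrative:

```cpp
#include <bits/stdc++.h>
using namespace std;

struct DSU {
    vector<int> parent, rank;                // rank is used as subtree size here
    // history records: (field, index, old value); field 0 = parent, 1 = rank
    vector<tuple<int, int, int>> history;

    DSU(int n) : parent(n), rank(n, 1) {
        iota(parent.begin(), parent.end(), 0);
    }

    void set_parent(int v, int p) {          // every write is logged
        history.emplace_back(0, v, parent[v]);
        parent[v] = p;
    }
    void set_rank(int v, int r) {
        history.emplace_back(1, v, rank[v]);
        rank[v] = r;
    }

    int find(int v) {
        if (parent[v] == v) return v;
        int root = find(parent[v]);
        if (parent[v] != root) set_parent(v, root);   // path compression, logged
        return root;
    }

    bool unite(int u, int v) {
        u = find(u), v = find(v);
        if (u == v) return false;
        if (rank[u] > rank[v]) swap(u, v);
        set_parent(u, v);                    // logs the old parent of u (which is u)
        set_rank(v, rank[v] + rank[u]);      // logs the old rank of v
        return true;
    }

    int snapshot() const { return (int)history.size(); }

    void rollback(int snap) {                // undo everything done after the snapshot
        while ((int)history.size() > snap) {
            auto [field, idx, val] = history.back();
            history.pop_back();
            if (field == 0) parent[idx] = val;
            else            rank[idx] = val;
        }
    }
};
```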
That's not optimal in terms of `history.size()`, but this is just an example. For instance, you can save only $$$(2, u, v)$$$ in `unite` and then restore `parent[u] = u` and `rank[v] -= rank[u]` on rollback.
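A sketch of that optimization under the same assumptions as the struct above: `unite` logs one record of type $$$2$$$, while path compression keeps logging its individual parent writes as type $$$0$$$ records:

```cpp
// unite logs a single record (2, u, v) instead of separate field records
bool unite(int u, int v) {
    u = find(u), v = find(v);
    if (u == v) return false;
    if (rank[u] > rank[v]) swap(u, v);
    parent[u] = v;
    rank[v] += rank[u];
    history.emplace_back(2, u, v);           // one record for the whole unite
    return true;
}

void rollback(int snap) {
    while ((int)history.size() > snap) {
        auto [type, a, b] = history.back();
        history.pop_back();
        if (type == 0) parent[a] = b;        // undo a logged path-compression write
        else {                               // type == 2: undo a unite
            parent[a] = a;                   // a was a root before that unite
            rank[b] -= rank[a];
        }
    }
}
```

Restoring `rank[v] -= rank[u]` is safe because once `u` stops being a root, its rank is never modified again until this unite itself is undone.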
Also, I said that you could, I didn't say that you should. Having path compression in DSU with rollbacks doesn't change the complexity; it's still $$$O(\log n)$$$ per operation.
As I remember, most amortized complexity bounds are broken by rollback operations:
Amortization goes roughly like this: "this quantity can only decrease so much in total, so even though one operation can be expensive, the total cost of the whole sequence of operations stays close to linear, and the average cost per operation is small."
Rollback: you have to redo work you have already done, possibly undoing and redoing it many times. By going back you restore the expensive state, so the average cost per operation can grow again.
Take `vector<>` as an example: each time you extend it with `push_back()`, `emplace_back()`, `resize()` or similar past its capacity $$$2^k$$$, it allocates a buffer of size $$$2^{k+1}$$$, copies the old data over, and frees the old buffer. Over $$$n$$$ insertions this costs about $$$O(2^0 + 2^1 + \dots + 2^{\lfloor \log_2 n \rfloor}) = O(2^{\lfloor \log_2 n \rfloor + 1} - 1) = O(n)$$$ in total. But once you add a rollback operation that also shrinks the capacity back through $$$2^{k-1}, 2^{k-2}, \dots$$$, then by repeatedly extending and rolling back around a capacity boundary each operation can cost $$$\Theta(2^k)$$$, so the total becomes about $$$O(2^k \cdot n) = O(n^2)$$$ and the amortized bound breaks.
The DSU is the same: it is amortized $$$O(\alpha(n))$$$ per operation on average, but that bound breaks under rollback operations. You can still achieve $$$O(\alpha(n))$$$ if you don't do any rollbacks, which is faster; with rollbacks, however, path compression is unlikely to make it faster.
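For illustration, here is a small self-contained experiment along these lines (it assumes `shrink_to_fit()` really releases capacity; the standard only treats it as a request, but mainstream implementations honor it):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Push and pop right at a power-of-two capacity boundary while also
// "rolling back" the capacity. Every iteration then reallocates and
// copies ~n elements twice, so push_back is no longer amortized O(1).
int main() {
    const size_t n = 1 << 15;                  // sit exactly at a capacity boundary
    const int rounds = 2000;

    vector<int> v(n, 0);
    v.shrink_to_fit();                          // capacity == n (implementation permitting)

    auto start = chrono::steady_clock::now();
    for (int i = 0; i < rounds; i++) {
        v.push_back(0);                         // capacity exceeded: reallocate + copy
        v.pop_back();
        v.shrink_to_fit();                      // capacity "rollback": reallocate + copy again
    }
    auto ms = chrono::duration_cast<chrono::milliseconds>(
                  chrono::steady_clock::now() - start).count();

    // Roughly rounds * 2n element copies in total, i.e. Theta(n) per operation.
    cout << rounds << " push/pop pairs with capacity rollback took " << ms << " ms\n";
}
```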