Can anyone explain the centroid decomposition solution?
Let's fix the current centroid and call it root.

If a path v - u of length k exists and it passes through root, we can split it into two parts: from v to root and from root to u. Note that v and u must lie in different subtrees of root (otherwise the concatenation would not be a simple path).

Next, we process every subtree of root in some fixed order (say, left to right) with a simple dfs(vertex, distance, edgeCount), collecting a map<distance, edgeCount> for each subtree (local) and one global map over all subtrees processed so far. Suppose we are at vertex u with distance D and edge count E. We check whether the global map contains the key k - D and, if so, update the answer: answer = min(answer, E + global[k - D]). We also take the minimum into the local map: local[D] = min(local[D], E). After the subtree is processed, we merge the local map into the global one; keeping a separate local map guarantees that two paths from the same subtree are never combined. The global map should start with global[0] = 0 so that paths ending at root itself are counted.
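For concreteness, here is a minimal C++ sketch of this scheme for the task as discussed in this thread (minimize the edge count over paths of total weight exactly k). All identifiers and the adjacency-list setup are mine, not taken from anyone's actual submission:

```cpp
#include <bits/stdc++.h>
using namespace std;

int k;                              // target path length
vector<vector<pair<int,int>>> adj;  // adj[v] = {neighbour, edge weight}
vector<bool> removed;               // removed[v]: v already served as a centroid
vector<int> sz;
int answer = INT_MAX;

int dfsSize(int v, int p) {
    sz[v] = 1;
    for (auto [to, w] : adj[v])
        if (to != p && !removed[to]) sz[v] += dfsSize(to, v);
    return sz[v];
}

int findCentroid(int v, int p, int treeSize) {
    for (auto [to, w] : adj[v])
        if (to != p && !removed[to] && sz[to] > treeSize / 2)
            return findCentroid(to, v, treeSize);
    return v;
}

// Walk one subtree of the centroid: query the global map, fill the local one.
void dfsCollect(int v, int p, long long dist, int edges,
                map<long long,int>& local, map<long long,int>& global) {
    if (dist > k) return;                    // distances beyond k are useless
    auto it = global.find(k - dist);         // complete a path through the root
    if (it != global.end()) answer = min(answer, edges + it->second);
    auto [lit, fresh] = local.try_emplace(dist, edges);
    if (!fresh) lit->second = min(lit->second, edges);
    for (auto [to, w] : adj[v])
        if (to != p && !removed[to])
            dfsCollect(to, v, dist + w, edges + 1, local, global);
}

void solve(int root) {
    int c = findCentroid(root, -1, dfsSize(root, -1));
    removed[c] = true;

    map<long long,int> global;
    global[0] = 0;                           // a path may end at the root itself
    for (auto [to, w] : adj[c]) {
        if (removed[to]) continue;
        map<long long,int> local;
        dfsCollect(to, c, w, 1, local, global);
        for (auto [d, e] : local) {          // merge local into global
            auto [git, fresh] = global.try_emplace(d, e);
            if (!fresh) git->second = min(git->second, e);
        }
    }
    for (auto [to, w] : adj[c])
        if (!removed[to]) solve(to);         // recurse into each remaining piece
}
```

With std::map each vertex pays an extra O(log N) per level of the decomposition, so this version is O(N log² N) overall; the comments below discuss shaving that factor.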
Thanks. This works because the DFS for the original centroid processes N vertices, and after removing the centroid each remaining component has at most half as many vertices, so every vertex is visited on O(log N) levels of the decomposition. Therefore the complexity is O(N log N), excluding the log factor from the map. Right?
I think it's enough to use an array of size k + 1 instead of a map if we are careful about reusing it. This way we can get O(N log N) without any hashing (otherwise one would use a hash map to remove the log factor that comes from the map). This is the official solution and it can be found here.
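In case the reuse trick is not obvious, here is one way I'd read it, reusing the globals and helpers from the sketch above (the two-pass structure and the names are my own guess at the idea, not taken from the official code). Each subtree is walked twice: first querying the array against earlier subtrees, then inserting its own pairs, which replaces the local map; afterwards the array is reset only at the positions that were actually written:

```cpp
// best[d] = min edge count at distance d over subtrees already processed
// at the current centroid. Initialize once: best.assign(k + 1, INT_MAX);
vector<int> best;
vector<int> touched;   // which entries of best were written, for cheap resets

void dfsArray(int v, int p, long long dist, int edges, bool updating) {
    if (dist > k) return;
    if (!updating) {                          // pass 1: query earlier subtrees
        if (best[k - dist] != INT_MAX)
            answer = min(answer, edges + best[k - dist]);
    } else if (best[dist] > edges) {          // pass 2: record this subtree
        if (best[dist] == INT_MAX) touched.push_back((int)dist);
        best[dist] = edges;
    }
    for (auto [to, w] : adj[v])
        if (to != p && !removed[to])
            dfsArray(to, v, dist + w, edges + 1, updating);
}

void solveArray(int root) {
    int c = findCentroid(root, -1, dfsSize(root, -1));
    removed[c] = true;
    best[0] = 0; touched.push_back(0);        // a path may end at the centroid
    for (auto [to, w] : adj[c]) {
        if (removed[to]) continue;
        dfsArray(to, c, w, 1, false);
        dfsArray(to, c, w, 1, true);
    }
    for (int d : touched) best[d] = INT_MAX;  // reset only what we used
    touched.clear();
    for (auto [to, w] : adj[c])
        if (!removed[to]) solveArray(to);
}
```

The reset loop is what keeps the reuse safe: clearing the whole array at every centroid would cost O(k) per centroid, while clearing only the touched cells costs no more than the DFS that wrote them, so the total stays O(N log N).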
Where can we submit the problem?
http://wcipeg.com/problem/ioi1112