I have been trying to solve this question with Mo's algorithm, but my solution was getting TLE. Could somebody please share an approach, if it is possible to solve it with Mo's algorithm? Link to problem: http://mirror.codeforces.com/contest/588/problem/E
Auto comment: topic has been updated by papa-ka-para (previous revision, new revision, compare).
Keeping the set of people on the current path in a set<int> or similar structure will have a time complexity of roughly $O((n + q)\sqrt{m}\log m)$, which is unlikely to pass. In a set it is $O(\log m)$ for inserting, deleting and finding the minimal $a$ elements. However, you insert and delete far more often than you find the minimal elements. There exists a data structure which lets you insert and delete elements in $O(1)$, while taking $O(\sqrt{m} + a)$ to find the minimal $a$ elements. For this, divide the range of legal values into $O(\sqrt{m})$ equal blocks and for each block keep a hash table containing all the added elements in that block. To insert or delete, simply access the relevant hash table; to find the minimal elements, iterate through all possible values while skipping the blocks whose hash table is empty. Using this, the overall complexity is about $O((n + q)\sqrt{m} + q(\sqrt{m} + a))$.
It is also worth noting that in the case of this problem, after you flatten the tree into an array, instead of dividing into blocks of cities, you should divide into blocks of people, otherwise Mo's algorithm won't optimize as intended.
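As an illustration of that block choice, here is a hypothetical way to assign Mo blocks by cumulative people count along the Euler tour. The names euler, peopleIn, buildBlocks and the sqrt-of-total block weight are assumptions for the sketch, not something specified in the comment above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hypothetical block assignment for Mo's algorithm on the flattened tree:
// instead of equal-length position blocks, make each block cover roughly the
// same number of people. euler[i] is the city at position i of the Euler tour
// and peopleIn[c] is how many people live in city c (assumed inputs).
vector<int> buildBlocks(const vector<int>& euler, const vector<int>& peopleIn) {
    int len = (int)euler.size();
    vector<long long> pref(len + 1, 0);
    for (int i = 0; i < len; i++)
        pref[i + 1] = pref[i] + peopleIn[euler[i]];
    long long total = pref[len];
    long long weight = max(1LL, (long long)sqrtl((long double)total)); // ~sqrt(total people) per block
    vector<int> block(len);
    for (int i = 0; i < len; i++)
        block[i] = (int)(pref[i] / weight); // block id grows with cumulative people, not with position
    return block;
}
// Queries are then sorted by (block[l], r) exactly as in ordinary Mo's algorithm.
```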
This is still quite slow: the constant is large, and hash tables only support $O(1)$ on average, not in the worst case. I would prefer a method with a better guaranteed time bound. Centroid decomposition works really well here; HLD is not bad either.
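For completeness, whichever deterministic approach you pick (binary lifting over ancestors, HLD, or centroid decomposition), the common building block is the same: each stored path segment keeps at most the $a$ smallest person IDs, and two segments are combined by a truncated sorted merge. A small sketch of that helper (mergeSmallest is an illustrative name, not from the comment):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative helper: merge two sorted lists of person IDs and keep only the
// `a` smallest. Binary lifting, HLD and centroid decomposition solutions all
// combine path segments with this kind of truncated merge.
vector<int> mergeSmallest(const vector<int>& x, const vector<int>& y, int a) {
    vector<int> res;
    size_t i = 0, j = 0;
    while ((int)res.size() < a && (i < x.size() || j < y.size())) {
        if (j == y.size() || (i < x.size() && x[i] < y[j]))
            res.push_back(x[i++]);
        else
            res.push_back(y[j++]);
    }
    return res;
}
```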
Hi, sorry for the late reply.
I learned something new from what you mentioned in the second and third paragraphs. Thank you for your reply.
I will try it with centroid decomposition.