190829557 This is my submission, which gives me TLE. I think using map is preferable to unordered_map, since the worst-case time complexity of unordered_map is O(n^2), compared to O(log n) for map!
Rehash your hash table: `m.rehash(n)` will allocate memory space for n elements (like `vector::resize(n)`).
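For illustration, a minimal sketch of pre-sizing the table before inserting (the element count here is an assumed workload, not taken from the problem):

```cpp
#include <unordered_map>

int main() {
    std::unordered_map<int, long long> m;
    const int n = 1000000;  // assumed number of insertions, for illustration
    m.reserve(n);           // size the table for n elements up front
    // m.rehash(b) instead asks for at least b buckets directly.
    for (int i = 0; i < n; ++i)
        m[i] = 1LL * i * i; // no rehashing happens during this loop
    return 0;
}
```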
This still doesn't pass the time limit in the mentioned problem (191032622). The correct solution is to use a custom hash function that makes unordered_map always run in $$$O(1)$$$ (191032878). Take a look at this blog post by neal.
It's not quite that simple. A map is a binary search tree: a binary tree in which every node in a node's left subtree is smaller than it and every node in its right subtree is larger. Lookups and updates are O(log n) because a well-implemented BST cuts the remaining tree roughly in half at each level, much like binary search; once the position is found, attaching the new element is constant time (just create a node and an edge).
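A toy example of that ordered behaviour (not related to the submission above):

```cpp
#include <cstdio>
#include <map>

int main() {
    std::map<int, const char*> m;
    m[30] = "thirty";   // each insert/lookup walks the tree: O(log n)
    m[10] = "ten";
    m[20] = "twenty";
    // Iteration visits keys in sorted order, because the tree is ordered.
    for (const auto& [key, value] : m)
        printf("%d -> %s\n", key, value);   // prints 10, 20, 30
    return 0;
}
```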
An unordered_map is a hash table. Hash tables can't be iterated in any meaningful order. They work by running a hash function on each inserted key to produce a number, which determines the bucket where the value is stored, so elements can be inserted and accessed in O(1) time. The complexity of building an unordered_map of n elements is not actually O(n^2), but O(n) amortized. The reason O(n^2) is theoretically possible is hash collisions: when the hash function sends many keys to the same bucket, every operation on that bucket degenerates to a linear scan, and the table also has to rehash (an O(n) operation) as it grows. If almost every insertion collided like that, n insertions would indeed take O(n^2) time. In practice this is essentially impossible unless the data inserted into the hash table was specifically designed to mess up the hash function. This is why we call the time complexity O(n) amortized.
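A rough sketch that makes the occasional rehash visible (the exact bucket counts depend on your standard library implementation):

```cpp
#include <cstdio>
#include <unordered_map>

int main() {
    std::unordered_map<int, int> m;
    size_t buckets = m.bucket_count();
    for (int i = 0; i < 100000; ++i) {
        m[i] = i;
        if (m.bucket_count() != buckets) {  // a rehash just happened: O(n) work
            buckets = m.bucket_count();
            printf("after %d inserts: %zu buckets\n", i + 1, buckets);
        }
    }
    // Rehashes are rare enough that insertion stays O(1) on average.
    return 0;
}
```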
Note: BST maps can have amortized time complexity too, actually. To keep lookups fast, some implementations occasionally have to rebalance themselves, which essentially rebuilds (part of) the tree and takes O(n) time for that step.
Unordered_map is O(1) on average, not amortized.
However, map is O(log n) amortized.
You should look up the difference between the two, but the key point is that an amortized bound is guaranteed, while an average-case bound is not. So map is guaranteed to take O(log n) per operation even in the worst case, whereas unordered_map is not guaranteed to be O(1); that is why O(n^2) blowups exist.