TL;DR
The policy hash table offers 3-6x faster insertion/deletion and 4-10x faster reads/writes than unordered_map. As far as I can tell, there are no downsides: the policy hash table (specifically the open-addressing gp_hash_table) beats unordered_map in all of my benchmarks.
Background
I've often been irritated by how slow unordered_map is in C++. Too often, I have something that runs fast enough in terms of complexity, but the constant factor from unordered_map slows the solution down too much. Yesterday though, after using the useful order statistics tree from https://mirror.codeforces.com/blog/entry/11080, I was curious whether there were any other useful data structures hiding in GCC's policy-based data structures. And lo and behold, I found a hash table.
Benchmarks
Well, enough backstory; let's look at some numbers. All benchmarks below were compiled with C++14 and -O2.
unordered_map linear insertion: 0.689846
cc_hash_table linear insertion: 0.408233
gp_hash_table linear insertion: 0.256131
unordered_map linear read/write: 1.69783
cc_hash_table linear read/write: 0.202474
gp_hash_table linear read/write: 0.26842
unordered_map random insertion: 2.90184
cc_hash_table random insertion: 3.15129
gp_hash_table random insertion: 0.56553
unordered_map random read/write: 2.02336
cc_hash_table random read/write: 0.333415
gp_hash_table random read/write: 0.403486
While the policy hash table gives only modest improvements for linear insertions, it blows unordered_map out of the water when it comes to reads/writes. These are order-of-magnitude improvements that make hash tables usable where they previously weren't.
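For reference, each benchmark is essentially one timed loop per container. The sketch below shows the structure I have in mind; the element count and exact loop bodies are my assumptions, and the real code is at the ideone link at the bottom of the post.

#include <chrono>
#include <iostream>
#include <random>
#include <string>
#include <unordered_map>
#include <ext/pb_ds/assoc_container.hpp>
using namespace std;
using namespace __gnu_pbds;

// Times a benchmark body and prints the elapsed seconds.
template <class F>
void bench(const string& name, F f) {
    auto start = chrono::high_resolution_clock::now();
    f();
    chrono::duration<double> elapsed = chrono::high_resolution_clock::now() - start;
    cout << name << ": " << elapsed.count() << "s\n";
}

int main() {
    const int N = 10000000;  // assumed element count; the linked benchmark may differ
    mt19937 rng(12345);

    bench("unordered_map linear insertion", [&] {
        unordered_map<int, int> m;
        for (int i = 0; i < N; i++) m[i] = i;        // keys inserted in increasing order
    });

    bench("gp_hash_table linear insertion", [&] {
        gp_hash_table<int, int> m;
        for (int i = 0; i < N; i++) m[i] = i;
    });

    bench("gp_hash_table random read/write", [&] {
        gp_hash_table<int, int> m;
        for (int i = 0; i < N; i++) m[rng() % N] += i;  // random keys mix hits and misses
    });
}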
Example
Benchmarks, of course, don't always reflect the real world, so here's an example where the policy hash table turns a solution that TLE's with unordered_map into an accepted one.
Example problem (5000 ms time limit): http://mirror.codeforces.com/contest/264/problem/C
Solution with unordered_map: http://mirror.codeforces.com/contest/264/submission/40542899 (TLE on test case 8)
Solution with policy hash table directly substituted in: http://mirror.codeforces.com/contest/264/submission/40573491 (TLE on test case 26)
Solution with unordered_map, rewritten to avoid clear() calls: http://mirror.codeforces.com/contest/264/submission/40590437 (TLE on test case 26)
Solution with policy hash table, rewritten to avoid clear() calls (one way to do this is sketched below): http://mirror.codeforces.com/contest/264/submission/40574196 (AC with max time of 3180 ms)
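On the "no clears" versions: I'm not claiming this is exactly what those submissions do, but a common way to avoid clearing a big hash table between phases is to version the entries instead of erasing them. A minimal sketch (the Versioned struct and helper names are mine, purely for illustration):

#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;

// Instead of clearing the table, bump a global version and treat entries
// written under an older version as absent.
struct Versioned {
    long long value = 0;
    int version = -1;
};

gp_hash_table<int, Versioned> table;
int cur_version = 0;

void reset() { cur_version++; }  // O(1) "clear"

long long get(int key) {
    auto it = table.find(key);
    return (it != table.end() && it->second.version == cur_version) ? it->second.value : 0;
}

void set(int key, long long v) {
    auto& e = table[key];
    e.value = v;
    e.version = cur_version;
}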
Usage
To use this data structure:
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
gp_hash_table<int, int> table;
From there, the API seems almost exactly the same.
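For instance, here's a minimal sketch of using it just like unordered_map (the keys and values are made up for illustration):

#include <ext/pb_ds/assoc_container.hpp>
#include <iostream>
using namespace std;
using namespace __gnu_pbds;

int main() {
    gp_hash_table<int, int> table;
    table[3] = 7;                      // operator[] inserts, like unordered_map
    table[5]++;                        // default-constructs the value, then increments
    if (table.find(4) == table.end()) cout << "4 not present\n";
    table.erase(3);
    for (auto& kv : table)             // iteration order is unspecified
        cout << kv.first << " -> " << kv.second << "\n";
}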
Defeating Anti-Hash tests
One weakness of hash tables is that mean people can find hash collisions offline and blow up the complexity of your hash map. In my opinion, the easiest fix is the one below: you don't need to write a strong hash function from scratch, just wrap the standard hash and XOR the key with a random constant chosen at runtime.
const int RANDOM = chrono::high_resolution_clock::now().time_since_epoch().count();
struct chash {
    // RANDOM differs on every run, so precomputed anti-hash tests can't target it
    size_t operator()(int x) const { return hash<int>{}(x ^ RANDOM); }
};
gp_hash_table<int, int, chash> table;
Why is it so much faster?
See my comment here: http://mirror.codeforces.com/blog/entry/60737?#comment-446357
Thanks to adamant for his post that revealed to me the existence of policy data structures, and thanks to ed1d1a8d for the discussion.
PS: In other posts about unordered_map, I've seen people claim that reserve and max_load_factor can increase performance drastically. They didn't seem to do much for me. However, if you want to do something similar with these hash tables, check out the example here: https://ideone.com/51t55C
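For the curious, that example roughly amounts to passing an explicit resize policy with external size access enabled, then calling resize() yourself (the analogue of reserve). I'm writing the template arguments below from memory, so treat them as an approximation and double-check against the link:

#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/hash_policy.hpp>
#include <functional>
using namespace std;
using namespace __gnu_pbds;

// gp_hash_table with an explicit resize policy; the final `true` enables
// external size access so we can call resize() ourselves.
typedef gp_hash_table<
    int, int,
    hash<int>, equal_to<int>,
    direct_mask_range_hashing<>, linear_probe_fn<>,
    hash_standard_resize_policy<hash_exponential_size_policy<>,
                                hash_load_check_resize_trigger<>, true>>
    fast_table;

int main() {
    fast_table table;
    table.resize(1 << 20);  // pre-allocate ~1M slots up front
}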
Code for the benchmarks can be found here: https://ideone.com/ZkNMbH
EDIT: I realized that gp_hash_table is the way to go, not cc_hash_table. gp_hash_table sacrifices ~10% reading/writing speed to gain 3-6x in insertion/deletion/clearing. I updated the post to reflect the new numbers.