↵
### Standard segtree↵
This is what a standard segtree looks like. You can set a single element and query over a range; here query(l, r) returns the sum over the half-open range [l, r). It's nice and simple, and I think it's a great implementation.
↵
~~~~~↵
int N;                 // size of the underlying array
int seg[2 * MAXN];     // supports up to MAXN leaves

void modify(int p, int val) {
    for (seg[p += N] = val; p > 1; p >>= 1)
        seg[p >> 1] = seg[p] + seg[p ^ 1];
}

// sum over the half-open range [l, r)
int query(int l, int r) {
    int res = 0;
    for (l += N, r += N; l < r; l >>= 1, r >>= 1) {
        if (l & 1)
            res += seg[l++];
        if (r & 1)
            res += seg[--r];
    }
    return res;
}
~~~~~↵
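
For concreteness, here's a quick usage sketch (purely illustrative, not part of the benchmarks; it assumes the N, seg, modify and query definitions above, with MAXN large enough):

~~~~~
#include <cstdio>

int main() {
    N = 5;
    int a[] = {1, 3, 5, 7, 9};
    for (int i = 0; i < N; i++)
        modify(i, a[i]);           // set each leaf; ancestors update on the way up
    printf("%d\n", query(1, 4));   // sum over [1, 4) = 3 + 5 + 7 = 15
    return 0;
}
~~~~~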
↵
### Dynamic Segtree ↵
However, say your underlying array has 1e9 possible locations but only ever contains 1e5 elements. For example, take a look at this [post](https://mirror.codeforces.com/blog/entry/19080). Obviously, you can't store all 2e9 nodes of your segtree, so what should you do? Here's one solution: replace the array with a hash table. However, as [user:adamant,2018-07-26] [mentions](https://mirror.codeforces.com/blog/entry/19080), unordered_map has too much overhead. We'll be benchmarking against the pointer-based dynamic segtree provided [here](https://mirror.codeforces.com/blog/entry/19080?#comment-239925). I'll also be using a custom hash function (sketched below). So to be clear, the implementation now looks like:
↵
<spoiler summary="Code">↵
~~~~~↵
int N;
unordered_map<int, int, chash> seg;   // chash is the custom hash function

void modify(int p, int val) {
    for (seg[p += N] = val; p > 1; p >>= 1)
        seg[p >> 1] = seg[p] + seg[p ^ 1];
}

int query(int l, int r) {
    int res = 0;
    for (l += N, r += N; l < r; l >>= 1, r >>= 1) {
        if (l & 1)
            res += seg[l++];
        if (r & 1)
            res += seg[--r];
    }
    return res;
}
~~~~~↵
</spoiler>↵
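
The custom hash chash isn't spelled out in the snippets; what I have in mind is a splitmix64-style mixer with a per-run random offset, roughly like the sketch below (not necessarily byte-for-byte the hash used in the benchmarks):

~~~~~
#include <chrono>
#include <cstddef>
#include <cstdint>

// Sketch of a custom hash: splitmix64 mixing plus a per-run random offset,
// so keys aren't hashed by the identity function.
struct chash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        static const uint64_t RANDOM =
            std::chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + RANDOM);
    }
};
~~~~~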
↵
Benchmarking it with 1e5 random insertions and 1e5 random queries gives:
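
Roughly, the benchmark has the following shape (a sketch only; the exact harness, seed, and key distribution behind the numbers below aren't reproduced here):

~~~~~
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
using namespace std;

// 1e5 random point updates followed by 1e5 random range-sum queries,
// over positions in [0, N). Assumes N, modify() and query() from above.
double bench() {
    mt19937 rng(12345);
    uniform_int_distribution<int> pos(0, (int)N - 1);
    auto start = chrono::high_resolution_clock::now();
    for (int i = 0; i < 100000; i++)
        modify(pos(rng), pos(rng));
    long long sink = 0;
    for (int i = 0; i < 100000; i++) {
        int l = pos(rng), r = pos(rng);
        if (l > r) swap(l, r);
        sink += query(l, r + 1);
    }
    auto end = chrono::high_resolution_clock::now();
    printf("checksum: %lld\n", sink);   // keep the queries from being optimized out
    return chrono::duration<double>(end - start).count();
}
~~~~~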
↵
~~~~~↵
pointer: 0.171485↵
unordered_map: 2.0646↵
~~~~~↵
↵
Wow. The unordered_map is about 12x slower. That's not really feasible for a lot of contests. What if we replace it with a policy hash table, though?
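
gp_hash_table lives in GCC's pb_ds extensions, so the snippets below also assume these two lines at the top of the file:

~~~~~
#include <ext/pb_ds/assoc_container.hpp>   // gp_hash_table
using namespace __gnu_pbds;
~~~~~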
↵
<spoiler summary="Code">↵
~~~~~~~~~~↵
int N;
gp_hash_table<int, int, chash> seg;

void modify(int p, int val) {
    for (seg[p += N] = val; p > 1; p >>= 1)
        seg[p >> 1] = seg[p] + seg[p ^ 1];
}

int query(int l, int r) {
    int res = 0;
    for (l += N, r += N; l < r; l >>= 1, r >>= 1) {
        if (l & 1)
            res += seg[l++];
        if (r & 1)
            res += seg[--r];
    }
    return res;
}
~~~~~~~~~~↵
↵
</spoiler>↵
↵
~~~~~↵
pointer: 0.202186↵
policy hash table: 0.384312↵
~~~~~↵
↵
Only about 2x slower. That's already very feasible. However, one might notice that since operator[] on a C++ map creates an element whenever you access a key that doesn't exist, the queries are inserting a lot of useless zero entries into the table. Thus, we can simply check that a key is present before we try to read it:
↵
~~~~~↵
using ll = long long;

ll N;
gp_hash_table<ll, ll, chash> seg;

// read a node's value without inserting a default entry for missing keys
ll get(ll x) { return (seg.find(x) == seg.end()) ? 0 : seg[x]; }

void modify(ll p, ll val) {
    for (seg[p += N] = val; p > 1; p >>= 1) {
        seg[p >> 1] = get(p) + get(p ^ 1);
    }
}

ll query(ll l, ll r) {
    ll res = 0;
    for (l += N, r += N; l < r; l >>= 1, r >>= 1) {
        if (l & 1)
            res += get(l++);
        if (r & 1)
            res += get(--r);
    }
    return res;
}
~~~~~↵
Results:↵
↵
~~~~~
pointer: 0.208667
policy hash table: 0.310953
~~~~~

Only 1.5x slower!
↵
One more thing. Although it may seem like this implementation is just 1.5x slower overall, that's a little misleading. If we break the numbers down into insertions and queries separately, we see this:
↵
↵
~~~~~↵
policy hash table insertion: 0.24153↵
pointer insertion: 0.048696↵
~~~~~↵
↵
~~~~~↵
policy hash table queries: 0.08687↵
pointer queries: 0.133678↵
~~~~~↵
↵
So for queries, the pointer-based dynamic segtree is actually about 50% slower than the policy hash table version; the hash table only loses on insertions (where it's roughly 5x slower). Not too bad.
↵
↵