### Bruteforceman's blog

By Bruteforceman, history, 3 weeks ago,

Segment trees are not as memory-efficient as Fenwick trees, which often becomes the bottleneck in many problems. There have been attempts at reducing the space required to build a segment tree (e.g. an iterative implementation takes $2N$ space instead of $4N$). But there is a very simple trick that asymptotically improves the space complexity (sorry if this is already well-known).

When building the segment tree, we can simply stop the recursion when $r - l \leq \lg n$. This means the size of the tree is $O \left( \frac{n}{\lg n} \right)$ instead of $O(n)$.

Fortunately we can still answer queries (or perform updates) in $O(\lg n)$ time for most problems. For the internal nodes, we simply collect the contribution of each node as usual. For a leaf node with range $[l, r]$, we scan through the original array $a[l \cdots r]$ with a loop and calculate its contribution to the query. Since $r - l \leq \lg n$, the loop still takes $O(\lg n)$ time. Any range update/query visits only two leaf nodes, so the time required to process the leaf nodes is also $O(\lg n)$.
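
A minimal sketch of how this could look for range-sum queries with point updates (the struct name, block-size choice, and array sizing are my own, not the author's code — just one way to implement the idea above):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>
using namespace std;

// Sketch of the trick: recursion stops once a segment has at most B ~ lg n
// elements. Such "fat leaves" store only their block sum, and partially
// overlapping queries scan the original array in O(B) = O(lg n) time.
struct ShortSegTree {
    int n, B;
    vector<long long> a;     // the original array, kept for leaf scans
    vector<long long> tree;  // node sums; O(n / B) entries instead of O(n)

    ShortSegTree(const vector<long long>& v) : n((int)v.size()), a(v) {
        B = max(1, (int)log2((double)n));
        // leaf lengths lie in (B/2, B], so there are at most ~2n/B leaves;
        // 4x that is a safe bound for 1-based heap indexing
        tree.assign(8 * (n / B + 2), 0);
        build(1, 0, n - 1);
    }
    long long build(int node, int l, int r) {
        if (r - l + 1 <= B) {  // fat leaf: sum its whole block directly
            long long s = 0;
            for (int i = l; i <= r; i++) s += a[i];
            return tree[node] = s;
        }
        int m = (l + r) / 2;
        return tree[node] = build(2 * node, l, m) + build(2 * node + 1, m + 1, r);
    }
    void update(int node, int l, int r, int pos, long long val) {
        if (r - l + 1 <= B) {  // adjust the block sum and the array itself
            tree[node] += val - a[pos];
            a[pos] = val;
            return;
        }
        int m = (l + r) / 2;
        if (pos <= m) update(2 * node, l, m, pos, val);
        else update(2 * node + 1, m + 1, r, pos, val);
        tree[node] = tree[2 * node] + tree[2 * node + 1];
    }
    long long query(int node, int l, int r, int ql, int qr) {
        if (qr < l || r < ql) return 0;
        if (ql <= l && r <= qr) return tree[node];
        if (r - l + 1 <= B) {  // partial overlap at a fat leaf: scan O(B)
            long long s = 0;
            for (int i = max(l, ql); i <= min(r, qr); i++) s += a[i];
            return s;
        }
        int m = (l + r) / 2;
        return query(2 * node, l, m, ql, qr) + query(2 * node + 1, m + 1, r, ql, qr);
    }
    void update(int pos, long long val) { update(1, 0, n - 1, pos, val); }
    long long query(int l, int r) { return query(1, 0, n - 1, l, r); }
};
```

Note the one design choice that differs from a plain segment tree: `update` at a fat leaf touches both the stored block sum and the array, since later queries may scan the array directly.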

This trick may not be that useful if we need only one segment tree in our problem. However, many problems require building multiple segment trees. For example, one common pattern is to create a segment tree for each bit position. Applying this trick to such problems can reduce the space complexity from $O(n \lg n)$ to $O(n)$.
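
As a quick sanity check of that arithmetic (a toy count of my own, not from the blog): the truncated recursion over $n$ elements produces $O(\frac{n}{\lg n})$ nodes, so even $\lg n$ copies of it (one per bit position) total $O(n)$ nodes, whereas $\lg n$ ordinary trees would need $O(n \lg n)$.

```cpp
// Counts the nodes created by a recursion that stops once the segment
// has at most B elements (the truncated tree from the blog).
long long count_nodes(int l, int r, int B) {
    if (r - l + 1 <= B) return 1;  // fat leaf
    int m = (l + r) / 2;
    return 1 + count_nodes(l, m, B) + count_nodes(m + 1, r, B);
}
```

For example, with $n = 2^{20}$ and $B = 20$, one truncated tree has about $n/8$ nodes, so 20 of them (one per bit) stay under $4n$ nodes total, while 20 ordinary trees of $2n - 1$ nodes each would need about $40n$.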

Example Problem:


 » 3 weeks ago, # |   +11 Great blog! Can you please share some problems where we construct a segment tree for each bit? Thank You!
•  » » 3 weeks ago, # ^ |   +11
•  » » » 3 weeks ago, # ^ |   +8 Nice, Thank You!
•  » » » 3 weeks ago, # ^ |   0 Thanks. I'll add this example to the blog!
 » 3 weeks ago, # |   +10 If you implement seg trees with this midpoint https://mirror.codeforces.com/blog/entry/112755 then if you loop over the seg tree in BFS order, the lengths of node segments are non-increasing:

```cpp
#include <bits/stdc++.h>
using namespace std;

int get_midpoint(int l, int r) { // [l, r)
    int pow_2 = 1 << __lg(r - l); // bit_floor(unsigned(r - l));
    return min(l + pow_2, r - pow_2 / 2);
}

int main() {
    for (int n = 1; n < 100; n++) {
        cout << "n: " << n << endl;
        queue<array<int, 3>> q;
        q.push({1, 0, n});
        int prev_len = n;
        while (!empty(q)) {
            auto [u, le, ri] = q.front();
            q.pop();
            cout << "here: " << u << " " << le << " " << ri << " len " << ri - le << endl;
            assert(prev_len >= ri - le);
            prev_len = ri - le;
            if (ri - le == 1) continue;
            int mi = get_midpoint(le, ri);
            // int mi = (le + ri + 1) / 2;
            q.push({2 * u, le, mi});
            q.push({2 * u + 1, mi, ri});
        }
        cout << endl << endl;
    }
    return 0;
}
```

Meaning the set of nodes with length $\geq X$ will be some prefix of $[1, 2n)$.
•  » » 3 weeks ago, # ^ |   0 Sketch of proof.

Induction hypothesis: the nodes on the $i$-th row have lengths $2^{k+1}, \ldots, 2^{k+1}, x, 2^k, \ldots, 2^k$ with $2^k < x < 2^{k+1}$.

Then going from the $i$-th row to the $(i+1)$-th row:

• segments of length $2^{k+1}$ split into 2 children of length $2^k$;
• segments of length $2^k$ split into 2 children of length $2^{k-1}$;
• the segment of length $x$ splits into children of lengths either $2^k, y$ with $2^{k-1} < y < 2^k$; or $y, 2^{k-1}$ with $2^{k-1} < y < 2^k$; or $2^k, 2^{k-1}$.

Thus the $(i+1)$-th row looks like $2^k, \ldots, 2^k, y, 2^{k-1}, \ldots, 2^{k-1}$ with $2^{k-1} < y < 2^k$.
 » 3 weeks ago, # |   0 I still don't really get why space complexity is $O(n \log n)$ in general?
•  » » 3 weeks ago, # ^ | ← Rev. 2 →   +24 Maybe you mean $O(\frac{n}{\lg n})$? An intuitive way is to think of it as building a segment tree over blocks (that is, each leaf represents a block), where each block has $O(\lg n)$ elements; then we have $O(\frac{n}{\lg n})$ blocks, and building a segment tree over them requires $O(\frac{n}{\lg n})$ nodes.
•  » » 3 weeks ago, # ^ |   +16 If you mean in the last paragraph, that's an example where you build a segment tree for every bit position. So a regular segment tree will use $O(\log n \cdot n)$ nodes, while the one in the blog uses $O(\log n \cdot \frac{n}{\log n}) = O(n)$ nodes.
 » 3 weeks ago, # |   +20 Nice. To me, it feels like a combination of segment tree and sqrt decomposition ideas.
 » 3 weeks ago, # | ← Rev. 3 →   +8 There were DSA problems where I used something similar, like the sqrt decomposition idea, but I was only able to get (or at least prove) $O(N \log \log N)$ complexity. It is nice to know more about segment trees, thanks.