This blog came about because of some pretty random events.↵
↵
- A while back [user:ponysalvaje,2023-10-16] explained to me how the Fenwick tree works and some cool tricks you can do with it.

- This made me realize that you can compute any range query on a Fenwick tree, not just prefix queries (but in $O(\log(N)^{2})$ time). Because of this, I wrote [this other blog](https://mirror.codeforces.com/blog/entry/121381).

- Some time later, I saw [this blog](https://mirror.codeforces.com/blog/entry/74847). As soon as I read the first paragraph, I stopped and tried to invent such a data structure myself. I thought of storing jumps of distance $LSB(depth_u)$. This works, but the ancestor queries are actually $O(\log(N)^{2})$ in the worst case.

- Then, I read the original blog and thought that the idea presented in it can also be used to compute range queries and updates. (I didn't prove it, but it seems obvious that each position in the array is covered by, at most, $\log(N)$ ranges.)

TLDR: just take [this blog of mine](https://mirror.codeforces.com/blog/entry/121381) and sprinkle some [canonical skew-binary numbers](https://mirror.codeforces.com/blog/entry/74847) on it, and boom, you have a stupid version of the Fenwick tree that is asymptotically faster than the Fenwick tree on an obscure operation.
↵
## The idea↵
↵
I explained in [my previous blog](https://mirror.codeforces.com/blog/entry/121381) that we can think of the Fenwick tree as a set of intervals, and that we can use those intervals to answer any range query in $O(\log(N)^{2})$. But there is no real reason to use exactly the same intervals that the Fenwick tree uses. Instead, we can try to use some other set of intervals to make things more efficient (at least in an asymptotic sense).
↵
Specifically, we use intervals based on the jump pointers described in [that one blog from the catalog](https://mirror.codeforces.com/blog/entry/74847). That blog claims that we can jump any distance from any node in $O(\log(N))$ by following a greedy algorithm (the same greedy I used to perform range queries). The greedy looks like $O(\log(N)^{2})$ on an ancestor query, but it happens to actually be $O(\log(N))$ because of some happy accidents.
↵
To be clear, I'll put things in bullet points:
↵
- We want to compute range sum queries on some array `A`↵
- The data structure stores `tree[i] = sum of A(i-jmp[i], i]` (yes, an open-closed interval)↵
- The jumps are defined recursively: if `jmp[i] == jmp[i-jmp[i]]` then `jmp[i+1] = 2*jmp[i]+1` and otherwise `jmp[i+1] = 1`↵
- The greedy algorithm to perform queries goes as follows:↵
↵
```c++
int A[MAXN+1], jmp[MAXN+1];
long long tree[MAXN+1];

long long range(int i, int j) { // inclusive, 1-indexed
    long long ans = 0;
    while (j-i+1 > 0) {
        if (j-i+1 >= jmp[j]) {
            // the interval (j-jmp[j], j] fits inside [i, j]: take it whole
            ans += tree[j];
            j -= jmp[j];
        } else {
            // otherwise take a single element and step left
            ans += A[j];
            j -= 1;
        }
    }
    return ans;
}
```
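For intuition, here is a self-contained replay of the jump recursion (the helper name and `std::vector` wrapper are mine, not part of the original code). Note that `jmp[0]` stays `0`, so a jump that reaches all the way down to position 0 never satisfies `jmp[i] == jmp[i-jmp[i]]`, and the next jump resets to 1:

```cpp
#include <vector>

// Replays the recursive definition of the jumps for positions 1..n.
std::vector<int> build_jumps(int n) {
    std::vector<int> jmp(n + 1, 0); // jmp[0] stays 0 on purpose
    if (n >= 1) jmp[1] = 1;
    if (n >= 2) jmp[2] = 1;
    for (int i = 2; i + 1 <= n; ++i)
        jmp[i + 1] = (jmp[i] == jmp[i - jmp[i]]) ? 2 * jmp[i] + 1 : 1;
    return jmp;
}
```

For $n = 15$ this gives the jump lengths `1 1 3 1 1 3 7 1 1 3 1 1 3 7 15`: every length has the form $2^k-1$, which is exactly the set of digit weights of skew-binary numbers.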
↵
Some observations:↵
↵
- The above algorithm performs any query in $O(\log(N))$ time.
- This is due to some properties of skew-binary numbers (trust me bro...)
- The jumps are well-nested and they have power of two (minus one) length↵
- It follows that each position is covered by up to $\log(N)+O(1)$ intervals↵
- In other words, when we change a value we only need to update $O(\log(N))$ intervals↵
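The last two bullets are easy to check empirically. The sketch below (entirely my own code, not from the blog) counts, for every position, how many intervals $(i - jmp_i, i]$ cover it, and returns the maximum count:

```cpp
#include <algorithm>
#include <vector>

// For positions 1..n, count how many intervals (i-jmp[i], i] contain each
// position and return the maximum count over all positions. If the nesting
// claim holds, this should stay around log2(n) + O(1).
int max_coverage(int n) {
    std::vector<int> jmp(n + 1, 0), cnt(n + 1, 0);
    if (n >= 1) jmp[1] = 1;
    if (n >= 2) jmp[2] = 1;
    for (int i = 2; i + 1 <= n; ++i)
        jmp[i + 1] = (jmp[i] == jmp[i - jmp[i]]) ? 2 * jmp[i] + 1 : 1;
    for (int i = 1; i <= n; ++i)
        for (int j = i; j > i - jmp[i]; --j)
            ++cnt[j]; // position j is covered by interval (i-jmp[i], i]
    return *std::max_element(cnt.begin() + 1, cnt.end());
}
```

For example, for $n = 15$ the deepest chains look like $(0,15] \supset (0,7] \supset (0,3] \supset (0,1]$, so the maximum coverage is $4 = \log_2(15) + O(1)$.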
↵
Sadly, I haven't figured out how to compute which intervals cover a particular index (nor the jumps) on-line, but we can just precompute them.
↵
Below is the precomputation of the jumps and the intervals that cover each index:↵
↵
```c++
vector<int> covered[MAXN+1];

void init() {
    // jmp[0] is implicitly 0 (global array), so the recurrence resets
    // to 1 whenever a jump reaches all the way down to position 0.
    jmp[1] = jmp[2] = 1;
    for (int i = 2; i+1 <= MAXN; ++i) {
        if (jmp[i] == jmp[i-jmp[i]]) {
            jmp[i+1] = 2 * jmp[i] + 1;
        } else {
            jmp[i+1] = 1;
        }
    }
    // covered[j] = every i whose interval (i-jmp[i], i] contains j
    for (int i = 1; i <= MAXN; ++i) {
        for (int j = i; j > i-jmp[i]; --j) {
            covered[j].push_back(i);
        }
    }
}
```
↵
Since we have precomputed the intervals that cover each index, updating them is trivial.↵
↵
```c++↵
void update(int i, int x) {
    A[i] += x;
    for (int j : covered[i]) tree[j] += x;
}
```↵
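Putting everything together, here is a compact self-contained toy version of the whole structure (my own arrangement of the code above, with a tiny fixed size) that updates and queries can be sanity-checked against:

```cpp
#include <vector>

// Toy version of the structure: jumps, covered lists, update, range query.
const int N = 15;
int A[N + 1], jmp[N + 1];
long long tree[N + 1];
std::vector<int> covered[N + 1];

void init() {
    jmp[1] = jmp[2] = 1;
    for (int i = 2; i + 1 <= N; ++i)
        jmp[i + 1] = (jmp[i] == jmp[i - jmp[i]]) ? 2 * jmp[i] + 1 : 1;
    for (int i = 1; i <= N; ++i)
        for (int j = i; j > i - jmp[i]; --j)
            covered[j].push_back(i); // interval (i-jmp[i], i] covers j
}

void update(int i, int x) {
    A[i] += x;
    for (int j : covered[i]) tree[j] += x;
}

long long range(int i, int j) { // inclusive, 1-indexed
    long long ans = 0;
    while (j >= i) {
        if (j - i + 1 >= jmp[j]) { ans += tree[j]; j -= jmp[j]; }
        else { ans += A[j]; --j; }
    }
    return ans;
}
```

For instance, after `update(3, 5)` and `update(10, 7)`, `range(1, 15)` is answered by a single interval lookup (`tree[15]`, since `jmp[15] == 15`), while `range(4, 10)` combines `tree[10]`, `A[7]`, and `tree[6]`.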
↵
We could do this with less memory and time by storing, for each interval, the smallest interval that strictly covers it, and walking that chain like a linked list. It also looks easy to construct, given the way the jumps are built, but this data structure is too useless for me to care.
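For completeness, here is one way that construction could look (entirely my own sketch, not from the blog): because the intervals are well-nested, a single stack pass finds, for each interval, the smallest interval strictly containing it.

```cpp
#include <vector>

// parent[i] = index of the smallest interval strictly containing
// (i-jmp[i], i], or 0 if none. Relies on well-nestedness: when interval i
// starts at or before the interval on top of the stack, i is its parent.
std::vector<int> interval_parents(int n) {
    std::vector<int> jmp(n + 1, 0), parent(n + 1, 0), stk;
    if (n >= 1) jmp[1] = 1;
    if (n >= 2) jmp[2] = 1;
    for (int i = 2; i + 1 <= n; ++i)
        jmp[i + 1] = (jmp[i] == jmp[i - jmp[i]]) ? 2 * jmp[i] + 1 : 1;
    for (int i = 1; i <= n; ++i) {
        // pop every interval nested inside (i-jmp[i], i]
        while (!stk.empty() && stk.back() - jmp[stk.back()] >= i - jmp[i]) {
            parent[stk.back()] = i; // i is the first interval containing the top
            stk.pop_back();
        }
        stk.push_back(i);
    }
    return parent;
}
```

An update at position `p` would then be `for (int j = p; j != 0; j = parent[j]) tree[j] += x;`, since (by the nesting argument) the intervals covering `p` should be exactly interval `p` and its ancestors in this forest.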
↵
Thanks to [user:ponysalvaje,2023-11-22] for explaining how the Fenwick tree works.
↵