**TL;DR**

**Note: I don't know the original name of this data structure (if it has one); if you know it, please comment. In this blog I will call it Dynamic Prefix Sum.**

## Prefix Sums

Suppose you have an arbitrary array $$$a$$$ of size $$$n$$$ and $$$q$$$ queries, each a pair $$$(l, r)$$$ asking for $$$\sum_{i=l}^{r} a_i$$$. You can brute-force it, but that takes $$$O(nq)$$$ time and will get TLE. Instead you can use a method called prefix sums (if you are interested, you can discover more about it in this link). This method uses beginner-level dynamic programming: define another array $$$ps$$$ of length $$$n+1$$$ ($$$ps$$$ is 0-based indexed), assign $$$ps_0 = 0$$$, then for each $$$i$$$ from $$$1$$$ to $$$n$$$ use the following formula:

$$$ps_i = ps_{i-1} + a_{i-1}$$$ (assuming 0-based indexing for $$$a$$$; if it's 1-based, use $$$a_i$$$ instead)

After the prefix sum array is built, each element $$$ps_i$$$ stores the sum of the elements from the beginning of the array up to (but not including) index $$$i$$$, so: $$$ps_i = \sum_{j=0}^{i-1} a_j$$$

Using this, each query $$$(l, r)$$$ (1-based) is answered by printing $$$ps_r - ps_{l-1}$$$, so the whole problem is solved in $$$O(n + q)$$$ time.

## Faster Updates!

But now what if we want to do point updates, i.e. set the value of some $$$a_i$$$ to $$$k$$$? We would have to update all prefix sums from $$$i$$$ to $$$n$$$, which in the worst case is $$$O(n)$$$ — too slow. What we do instead is use a data structure called Square Root Decomposition, but not on the original array: on the prefix sum array. For more info about square root (sqrt) decomposition you can refer to this link; for now, the only things you need to know about it are that it can answer point queries (what is the $$$i$$$'th element of the array right now, after the updates) in $$$O(1)$$$, and it can do range updates (adding a value to all elements from $$$l$$$ to $$$r$$$) in $$$O(\sqrt n)$$$.

Now we can use this to boost our prefix sums: we decompose the prefix sum array, and then a point update of $$$a_i$$$ becomes a single range update on the prefix sum array from position $$$i$$$ (the updated position) to $$$n$$$, which takes $$$O(\sqrt n)$$$. Why can't we use a Segment Tree with lazy updates instead of Square Root Decomposition? Because I don't know how to do point queries on a Segment Tree in $$$O(1)$$$ (if you know, comment it!), and without that we can't access the prefix sum array fast enough for range sum queries.

so here is the code:

**code**

And now we have a data structure with $$$O(n)$$$ build time, $$$O(1)$$$ range query time, and $$$O(\sqrt n)$$$ point update time.

## Comparing it with other similar data structures

After comparing the actual runtime of this data structure with some similar ones, I found the following. For a workload with roughly 50% updates and 50% queries, the best option is the **Fenwick Tree**, which is very easy to implement and has a very low constant factor, with $$$O(n)$$$ build and $$$O(\log n)$$$ query & update. For a workload with more queries than updates, the method described in this blog is also a valid option. If you have a lot of updates, either take the Fenwick tree, which is superfast, or normal square root decomposition. If you don't have updates, use normal prefix sums.

Comparing the time complexity of different data structures (build, query, and update time, from left to right):

| Data structure | Build | Query | Update |
| --- | --- | --- | --- |
| Brute Force | $$$O(1)$$$ | $$$O(n)$$$ | $$$O(1)$$$ |
| Fenwick & Segment Tree | $$$O(n)$$$ | $$$O(\log n)$$$ | $$$O(\log n)$$$ |
| Prefix Sum | $$$O(n)$$$ | $$$O(1)$$$ | $$$O(n)$$$ |
| Dynamic Prefix Sum | $$$O(n)$$$ | $$$O(1)$$$ | $$$O(\sqrt n)$$$ |
| Square Root Decomposition | $$$O(n)$$$ | $$$O(\sqrt n)$$$ | $$$O(1)$$$ |
| Square Root Tree | $$$O(n \log \log n)$$$ | $$$O(1)$$$ | $$$O(\sqrt n)$$$ |

One of the biggest disadvantages of this method is that it's not useful for minimum / maximum / gcd / lcm / ... queries, since those operations aren't invertible. It can be used for multiplication and some bitwise operations (| and & don't have an optimal way and are not recommended, but for ^ there is no problem).

Just do regular square root decomposition on the array, and then you don't need invertibility.

You are correct. As I said, this method is useful when you know you have more queries than updates; in that case, regular square root decomposition will be much slower.

Sorry, I didn't read that you are describing an $$$O(1)/O(\sqrt{n})$$$ structure. This can be found here in the bonus section.

thanks!

that was interesting! :D