This might be a well-known technique, but as a cyan my perspective is quite limited; regardless, I came up with it independently and am posting it here.
By the way, interesting educational blogs on Codeforces have been becoming increasingly rare lately...
Example Problem
You are given an array of length $$$n$$$, initially filled with $$$1$$$s, and must handle $$$Q$$$ operations.
Let $$$f(x)=\frac{1}{x}$$$. The operations are:

- Given an interval $$$[l,r]$$$ and a positive integer $$$x$$$: add $$$x$$$ to every element in the interval.
- Given an interval $$$[l,r]$$$: compute $$$\sum_{i=l}^r f(a[i])$$$ to four significant decimal digits.
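To make the problem concrete, here is a minimal brute-force baseline (all names are mine, not from the author's code) that applies both operations directly in $$$O(n)$$$ each. It is useful as a correctness reference for whatever approximation scheme is built later.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Naive O(n)-per-operation baseline for the example problem.
struct Baseline {
    std::vector<long long> a;
    explicit Baseline(int n) : a(n, 1) {}           // array starts as all 1s
    void rangeAdd(int l, int r, long long x) {      // 0-based, inclusive
        for (int i = l; i <= r; ++i) a[i] += x;
    }
    double rangeSumF(int l, int r) const {          // sum of f(a[i]) = 1/a[i]
        double s = 0;
        for (int i = l; i <= r; ++i) s += 1.0 / a[i];
        return s;
    }
};
```

Any clever structure only needs to match this output within the stated $$$\epsilon$$$.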
Before writing this, I asked chromate00, who said the problem seemed utterly ridiculous and completely impossible at first glance; I believe anyone seeing it for the first time would agree. This blog presents a technique that solves it, and $$$f(x)=\frac{1}{x}$$$ can be replaced with any function whose derivative is relatively small (this is admittedly vague: I never worked out the exact criterion before retiring, but perhaps any function with codomain in $$$[0,1]$$$ would work? Feel free to add insights).
Solution
For a single number under $$$f(x)=\frac{1}{x}$$$, the maximum acceptable error is $$$\epsilon/n$$$. The natural structure is therefore value blocking, in the spirit of the blocks used for integer division $$$\lfloor n/i \rfloor$$$: partition the whole range $$$\mathbb{R}^+$$$ into roughly $$$\sqrt{n/\epsilon}$$$ intervals, so that approximating $$$f(x)$$$ by a constant keeps the error within bounds on each interval. Of course, this method is too naive: with a complexity of $$$O(n^{1.5} \cdot \epsilon^{-1})$$$ multiplied by many $$$\log$$$ factors, it can't even outperform the most basic brute force. Inspired by DIVCNT1 (where min25 fit the $$$f$$$ curve with linear functions instead of plain constants), we instead approximate $$$f$$$ with several polynomial pieces, essentially a Taylor expansion. A segment tree can easily maintain the sum of polynomial functions.
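As a sketch of the Taylor piece itself (function names are mine): the degree-$$$k$$$ expansion of $$$\frac{1}{x}$$$ around a center $$$c$$$ is $$$\frac{1}{x} \approx \sum_{j=0}^{k} \frac{(-1)^j (x-c)^j}{c^{j+1}}$$$, and a segment tree node can evaluate $$$\sum_i f(a[i])$$$ from the power sums $$$\sum_i (a_i - c)^j$$$.

```cpp
#include <cmath>
#include <vector>

// Coefficients of the degree-k Taylor polynomial of 1/x around c:
// the coefficient of (x-c)^j is (-1)^j / c^(j+1).
std::vector<double> taylorInv(double c, int k) {
    std::vector<double> coef(k + 1);
    double p = 1.0 / c;                       // p = 1 / c^(j+1)
    for (int j = 0; j <= k; ++j) {
        coef[j] = (j % 2 == 0 ? p : -p);
        p /= c;
    }
    return coef;
}

// Evaluate the polynomial at x (Horner would also work; this mirrors
// how a node would combine per-degree power sums).
double evalPoly(const std::vector<double>& coef, double c, double x) {
    double s = 0, t = 1;                      // t = (x-c)^j
    for (double cf : coef) { s += cf * t; t *= (x - c); }
    return s;
}
```

The remainder shrinks geometrically in $$$|x-c|/c$$$, which is what makes short value blocks with low-degree pieces viable.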
But here's a problem :nerd: what happens if, after an increment, a number crosses the boundary of its current polynomial piece and lands in a new one? The fix is simple: since there are only increments and no decrements, each position crosses a polynomial boundary only a bounded number of times, so brute-force recomputation on each crossing suffices.
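A toy demonstration of this amortized argument (all names here are mine, not the author's): because values only grow, an element's block index is non-decreasing, so the total number of "rebuild" events over all adds is at most $$$n$$$ times the number of blocks.

```cpp
#include <array>
#include <vector>

// Block index of value v under sorted boundaries b:
// block t covers [b[t], b[t+1]).
int blockOf(long long v, const std::vector<long long>& b) {
    int t = 0;
    while (t + 1 < (int)b.size() && v >= b[t + 1]) ++t;
    return t;
}

// Apply range adds {l, r, x} naively and count how many times an
// element changes block -- each such event is where the real structure
// would rebuild that leaf's Taylor piece.
int countCrossings(int n, const std::vector<long long>& bounds,
                   const std::vector<std::array<long long, 3>>& adds) {
    std::vector<long long> a(n, 1);
    std::vector<int> blk(n, blockOf(1, bounds));
    int crossings = 0;
    for (auto [l, r, x] : adds)
        for (long long i = l; i <= r; ++i) {
            a[i] += x;
            int nb = blockOf(a[i], bounds);
            if (nb != blk[i]) { ++crossings; blk[i] = nb; }  // rebuild here
        }
    return crossings;
}
```

In the real structure the same accounting holds on segment-tree leaves, which is why the crossings don't hurt the overall complexity.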
Complexity Analysis
I can't prove it. My guess is $$$O(\log ^{\text{a lot}} \cdot (n \cdot \epsilon^{-1})^{1+1/k} \cdot k^3)$$$, where $$$k$$$ is presumably the degree of the approximating polynomials. There are many $$$\log$$$ factors here, but I provide code that, at least for $$$2 \times 10^5$$$ data, is at least 3x as fast as brute force.