In problem E, a square root decomposition + binary indexed tree solution gets accepted when block_size is fixed at about 800, but gets TLE when block_size is set to sqrt(n). In both solutions each update is O(4*log n), each query is O(2*block_size + (n/block_size)*log n), and the precomputation is O(n*log n). So, is the data set weak for problem E? Isn't it possible to create an input on which this solution can't get accepted? Accepted using block_size = 800: 47504503

**Update:** The accepted solution runs in about 2.5 sec when block_size is 1700.
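A possible explanation for why a fixed block size beats sqrt(n) here, sketched from the query cost stated above (the value of n is not quoted in this comment, so n ≈ 10^5 is only an assumption for the numeric estimate): minimizing the per-query cost over the block size B gives

$$
f(B) = 2B + \frac{n}{B}\log n, \qquad f'(B) = 2 - \frac{n\log n}{B^2} = 0 \;\Rightarrow\; B^{*} = \sqrt{\frac{n\log n}{2}},
$$

which for n = 10^5 and log2(n) ≈ 17 gives B* ≈ 920, noticeably larger than sqrt(n) ≈ 316. So a block size around 800–1700 sits much closer to the balance point, while B = sqrt(n) lets the (n/B)·log n term dominate the query cost.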


