1790A - Polycarp and the Day of Pi

Idea: MikeMirzayanov


**Solution**

```
t = int(input())
pi = '31415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679'
for _ in range(t):
    n = input() + '#'
    for i in range(len(n)):
        if pi[i] != n[i]:
            print(i)
            break
```

1790B - Taisia and Dice

Idea: Gornak40


**Solution**

```
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 55;

int n, s1, s2;
vector<int> res;

void solve() {
    res.clear();
    int d = s1 - s2;
    for (; s2 >= d; s2 -= d)
        res.push_back(d);
    if (s2) res.push_back(s2);
    for (int i = 0; i < res.size() && res.size() + 1 < n;) {
        if (res[i] == 1) {
            ++i;
            continue;
        }
        --res[i];
        res.push_back(1);
    }
    res.push_back(d);
}

int main() {
    int t; cin >> t;
    while (t--) {
        cin >> n >> s1 >> s2;
        solve();
        sort(res.begin(), res.end());
        for (int x : res)
            cout << x << ' ';
        cout << endl;
    }
}
```

1790C - Premutation

Idea: MikeMirzayanov


**Solution**

```
#include "bits/stdc++.h"
using namespace std;

int n;

void solve() {
    cin >> n;
    vector<vector<int>> perm(n, vector<int>(n - 1));
    vector<int> p(n, 0);
    vector<int> cnt(n + 1, 0);
    for (int i = 0; i < n; i++) {
        p[i] = i + 1;
        for (int j = 0; j < n - 1; j++) {
            cin >> perm[i][j];
            if (j == 0) cnt[perm[i][j]]++;
        }
    }
    for (int i = 1; i <= n; i++) {
        if (cnt[i] == n - 1) {
            p[0] = i;
            break;
        }
    }
    for (int i = 0; i < n; i++) {
        if (perm[i][0] != p[0]) {
            for (int j = 0; j < n - 1; j++) {
                p[j + 1] = perm[i][j];
            }
        }
    }
    for (int i = 0; i < n; i++) cout << p[i] << ' ';
    cout << endl;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        solve();
    }
    return 0;
}
```

1790D - Matryoshkas

Idea: MikeMirzayanov


**Solution**

```
#include <iostream>
#include <vector>
#include <queue>
#include <map>
#include <set>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    map<int, int> cnt;
    set<int> b;
    for (int i = 0; i < n; ++i) {
        cin >> a[i];
        cnt[a[i]]++;
        b.insert(a[i]);
        b.insert(a[i] + 1);
    }
    int last = 0;
    int res = 0;
    for (auto x : b) {
        int c = cnt[x];
        res += max(0, c - last);
        last = c;
    }
    cout << res << '\n';
}

int main(int argc, char* argv[]) {
    int t;
    cin >> t;
    for (int i = 0; i < t; ++i) {
        solve();
    }
}
```

1790E - Vlad and a Pair of Numbers

Idea: Vladosiya


**Solution**

```
t = int(input())
for _ in range(t):
    x = int(input())
    a = x
    b = 0
    for i in range(32, -1, -1):
        if x & (1 << i) > 0:
            continue
        if 2 * x - a - b >= (2 << i):
            a += 1 << i
            b += 1 << i
    if a + b == 2 * x and a ^ b == x:
        print(a, b)
    else:
        print(-1)
```

1790F - Timofey and Black-White Tree

Idea: molney


**Solution**

```
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 200200;
const int INF = 1e9;

int n, ANS = INF;
int crr[MAXN], dist[MAXN], res[MAXN];
bool clr[MAXN];
vector<int> gr[MAXN];

void init() {
    ANS = INF;
    for (int v = 0; v < n; ++v)
        gr[v].clear();
    fill(dist, dist + n, INF);
    memset(clr, 0, n);
}

void dfs(int v, int p) {
    if (dist[v] >= ANS) return;
    if (clr[v]) ANS = min(ANS, dist[v]);
    for (int u : gr[v]) {
        if (u == p) continue;
        if (dist[v] + 1 < dist[u]) {
            dist[u] = dist[v] + 1;
            dfs(u, v);
        } else ANS = min(ANS, dist[v] + 1 + dist[u]);
    }
}

void solve() {
    dist[*crr] = 0;
    dfs(*crr, -1);
    clr[*crr] = true;
    for (int i = 1; i < n; ++i) {
        dist[crr[i]] = 0;
        dfs(crr[i], -1);
        clr[crr[i]] = true;
        res[i] = ANS;
    }
}

int main() {
    int gorilla; cin >> gorilla;
    while (gorilla--) {
        cin >> n >> *crr, --(*crr);
        init();
        for (int i = 1; i < n; ++i)
            cin >> crr[i], --crr[i];
        for (int i = 1; i < n; ++i) {
            int v, u; cin >> v >> u, --v, --u;
            gr[v].push_back(u);
            gr[u].push_back(v);
        }
        solve();
        for (int i = 1; i < n; ++i)
            cout << res[i] << ' ';
        cout << '\n';
    }
}
```

1790G - Tokens on Graph

Idea: senjougaharin


**Solution**

```
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> token(n), boni(n);
    vector<vector<int>> g(n);
    vector<int> good(n);
    int m;
    cin >> m;
    int p, b;
    cin >> p >> b;
    for (int i = 0; i < p; i++) {
        int x;
        cin >> x;
        --x;
        token[x] = 1;
    }
    for (int i = 0; i < b; i++) {
        int x;
        cin >> x;
        --x;
        boni[x] = 1;
    }
    for (int i = 0; i < m; i++) {
        int x, y;
        cin >> x >> y;
        --x;
        --y;
        g[x].push_back(y);
        g[y].push_back(x);
    }
    for (int i = 0; i < n; i++)
        for (auto x : g[i])
            if (boni[i] && boni[x]) good[i] = 1;
    set<int> good_tokens;
    set<int> not_so_good_tokens;
    for (int i = 0; i < n; i++)
        for (auto x : g[i]) {
            if (token[i] && good[x]) good_tokens.insert(i);
            else if (token[i] && boni[x]) not_so_good_tokens.insert(i);
        }
    vector<int> d(n, int(1e9));
    queue<int> q;
    d[0] = 0;
    q.push(0);
    while (!q.empty()) {
        int k = q.front();
        q.pop();
        for (auto x : g[k]) {
            if (d[x] > d[k] + 1) {
                d[x] = d[k] + 1;
                if (boni[x]) q.push(x);
            }
        }
    }
    bool has_ans = false;
    for (int i = 0; i < n; i++) {
        if (!token[i] || d[i] > n) continue;
        has_ans |= (!good_tokens.empty() && (*good_tokens.begin() != i || *good_tokens.rbegin() != i));
        int cnt = not_so_good_tokens.size();
        if (not_so_good_tokens.count(i)) cnt--;
        has_ans |= d[i] <= 1 + cnt;
    }
    cout << (has_ans ? "YES" : "NO") << endl;
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
#ifdef _DEBUG
    freopen("input.txt", "r", stdin);
#endif
    int tc = 1;
    cin >> tc;
    for (int i = 0; i < tc; i++) {
        solve();
    }
}
```

I think F is interesting; there are many ways to solve it.

And it's very similar to E. Xenia and Tree

For F I would recommend tourist's solution.

Its TC is n log n?

Edit: Its TC is n log n.

It actually is O(n log n). Without going into the details of the solution, it's easy to see from the code that its time complexity is O(sum of answers for all operations), i.e. the sum of all numbers that we are supposed to output.

Claim: after k operations, the minimum length of path between black nodes is at most 2 * (n / k), for k > 0.

Proof by contradiction: Suppose that the minimum path length is greater than 2 * (n / k). Then, for every black vertex v there are at least (n / k) nodes u such that dist(u, v) <= n / k, and by the assumption all these u's must be white, because (n / k) < 2 * (n / k). Let cnt(v) denote the exact number of these white nodes for vertex v. Let S = sum of cnt(v) across all black v. I claim that no white vertex is counted twice in S. Because if it is, then it has a distance of at most (n / k) to 2 different black nodes; but then the distance between those 2 black nodes is at most 2 * (n / k), which contradicts our original assumption. Thus, S counts each white vertex at most once, and is therefore bounded by (n - k). But we also know that each of the k black vertices contributes at least (n / k) to S. Then,

S <= n - k implies k * (n / k) <= n - k; since k * (n / k) = n, we get n <= n - k, i.e. k <= 0, which is a contradiction. This finishes the proof.

Now the TC of the algorithm is the sum of 2 * (n / k) for all k from 1 to n. But this is just a harmonic series, whose sum is O(n log n).
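As a quick numerical sanity check of that harmonic bound (my own throwaway script, not from the thread; `total_work_bound` is a hypothetical name):

```python
import math

def total_work_bound(n):
    # After the k-th painted vertex, the answer is at most 2 * (n / k),
    # so the claimed total work is this harmonic-style sum.
    return sum(2 * n // k for k in range(1, n + 1))

n = 100_000
bound = total_work_bound(n)
# 2n * H_n grows as Theta(n log n); the ratio should hover near 1.
print(bound, bound / (2 * n * math.log(n)))
```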

Finally understood the proof completely after trying to upsolve this problem for 1 day :)

Another way to prove your claim is by thinking about the euler tour of a tree.

Intuition: It's straightforward to calculate the maximum value of the minimum distance between any two elements after some $$$K$$$ operations in the case of a list instead of a tree. So let's try to represent the "tree like a list".

Your claim would have been trivial if we were given a list instead of a tree in the problem. You can "transform" a tree and represent it using the list of nodes visited during the Euler tour of the tree. The difference in positions between any two elements in the list is always greater than or equal to the actual distance between the corresponding nodes in the tree. For example, the difference in position between nodes 6 and 1 is 4, whereas the actual distance is 3. (Image omitted.) Now we have a list of $$$2N - 1$$$ elements, and we know that even if these elements were independent of each other (the same node did not appear in the list twice), the answer would be bounded by $$$\lceil\frac{2N - 1}{K}\rceil$$$.

Note: the bound works for the tree as well, because whatever the answer for our list is, the actual answer for our tree is less than or equal to it.
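The Euler-tour inequality used here is easy to check mechanically. Below is an illustrative Python sketch (my own; `euler_tour` and `bfs_dist` are hypothetical helper names) that builds the $$$2N - 1$$$ element tour of a small tree and verifies that position differences dominate tree distances:

```python
from collections import deque

def euler_tour(adj, root=0):
    # Record the root, every node we step down into, and every node we
    # step back up to: the classic Euler tour with 2n - 1 entries.
    tour = [root]
    stack = [(root, -1, iter(adj[root]))]
    while stack:
        v, p, children = stack[-1]
        moved = False
        for u in children:
            if u != p:
                tour.append(u)
                stack.append((u, v, iter(adj[u])))
                moved = True
                break
        if not moved:
            stack.pop()
            if stack:
                tour.append(stack[-1][0])
    return tour

def bfs_dist(adj, s):
    # Plain BFS distances from s.
    d = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in d:
                d[u] = d[v] + 1
                q.append(u)
    return d

# Tree with 5 nodes: edges 0-1, 1-2, 1-3, 0-4.
adj = {0: [1, 4], 1: [0, 2, 3], 2: [1], 3: [1], 4: [0]}
tour = euler_tour(adj)
assert len(tour) == 2 * len(adj) - 1
# |i - j| is always >= the tree distance between tour[i] and tour[j].
for i in range(len(tour)):
    d = bfs_dist(adj, tour[i])
    assert all(abs(i - j) >= d[tour[j]] for j in range(len(tour)))
```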

This is indeed very simple, thanks a lot for the alternative approach!

the round was great thank you so much!

when will system tests take place?

Is this contest unrated?

Looks like it.

No. (Maybe the system tests haven't started because someone is still hacking.)

it isn't, updating rating for div3/4/edu rounds takes a while

All problems are interesting! Like them. But a little difficult for div 3:)

I thought that ABCDE were pretty easy, but FG were too hard

Will you please check if my solution is correct for problem E? I did it in O(1) complexity: 190841912

Bruh it's accepted

Yeah, I know. Just wanted to confirm that this approach was also correct.

It's correct.

On task B, the values of the elements are bounded from $$$1$$$ to $$$6$$$. So, even if you set everything to $$$s-r$$$ and then subtract manually, the number of operations in the worst case is $$$5n-5$$$, which is still $$$O(n)$$$.

The vector res is sorted as well, so it should be O(n log n).

Nope, my solution goes for an entirely different approach. I just start with $$$s-r$$$ on every index, and loop through $$$[1,n)$$$ (0-indexed), subtracting $$$1$$$ every time until we get sum $$$s$$$ in total. The worst-case time complexity is $$$O(n)$$$ due to the limit on the elements' values.

Okay, I thought you were talking about the solution given in the tutorial; its complexity is given as O(N*N). I went with a similar approach: print s-r until the remaining sum becomes less than s-r, then divide whatever sum is left (if any) over the remaining number of elements to be printed. So the TC was O(N) in my approach.
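The construction discussed in this subthread can be sketched in a few lines (illustrative Python only; `taisia_dice` is a hypothetical name, and the unit subtractions are batched per die, which keeps the same O(n) bound):

```python
def taisia_dice(n, s, r):
    # One die must show the removed maximum, s - r. Start the other
    # n - 1 dice at s - r as well, then strip off the excess while
    # keeping every die >= 1.
    mx = s - r
    rest = [mx] * (n - 1)
    excess = mx * (n - 1) - r
    i = 0
    while excess > 0:
        take = min(mx - 1, excess)  # leave at least 1 on this die
        rest[i] -= take
        excess -= take
        i += 1
    return [mx] + rest

print(taisia_dice(3, 12, 6))  # [6, 1, 5]: sum 12, max 6, rest sums to 6
```

Since every die starts at `mx` and only decreases, `mx` stays the maximum, and removing it leaves exactly `r`.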

My solution for task B: 190928632 complexity: O(n)

D is harder than E. E was quite simple.

what??

190841912. Check out my submission.

Can you pls explain how you arrived at your solution?

Wrong approach. Sorry!!

Hey, I don't think XOR is distributive over addition. For example, take a = 10 and b = 11 (in binary). Then a+b = 101 and (a+b)^a = 111. Now take a^a + a^b = a^b = 01. So (a+b)^a is not the same as a^a + a^b.

Yes My bad!!! Sorry.

True, the solution was possible with TC O(1) as well.

I also did it with O(1) complexity.

Yes I saw your submission

F is a standard problem for centroid decomposition: build the centroid tree and process two types of requests, adding a new black vertex and finding the closest black vertex.

When we insert a new black vertex, we just push, into each centroid ancestor above us, the distance to that centroid.

When we query the distance, we just iterate over all centroid ancestors and relax ans with min(ans, d(v, cent) + mn[cent]).

Working code: https://mirror.codeforces.com/contest/1790/submission/190980021
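That centroid-tree approach can be written out roughly as below. This is my own illustrative Python sketch, not the linked C++ submission: `centroid_decompose`, `ClosestBlack`, `paint`, and `query` are hypothetical names, and this is a sketch of the technique rather than a contest-ready solution.

```python
from math import inf

def centroid_decompose(adj, n):
    # cpar[c] is the centroid-tree parent of c; dist_to[v][c] is the tree
    # distance from v to each of its centroid ancestors (including v itself).
    removed = [False] * n
    cpar = [-1] * n
    dist_to = [dict() for _ in range(n)]
    sz = [0] * n
    stack = [(0, -1)]
    while stack:
        root, cp = stack.pop()
        # Collect the current component in BFS order, with BFS parents.
        par = {root: -1}
        order = [root]
        for v in order:
            for u in adj[v]:
                if not removed[u] and u not in par:
                    par[u] = v
                    order.append(u)
        for v in order:
            sz[v] = 1
        for v in reversed(order):
            if par[v] != -1:
                sz[par[v]] += sz[v]
        total = len(order)
        # Walk into the heavy child until no child subtree exceeds total // 2.
        c = root
        while True:
            nxt = -1
            for u in adj[c]:
                if not removed[u] and par.get(u) == c and sz[u] > total // 2:
                    nxt = u
                    break
            if nxt == -1:
                break
            c = nxt
        # Record distances from the centroid c to its whole component.
        dd = {c: 0}
        comp = [c]
        for v in comp:
            dist_to[v][c] = dd[v]
            for u in adj[v]:
                if not removed[u] and u not in dd:
                    dd[u] = dd[v] + 1
                    comp.append(u)
        removed[c] = True
        cpar[c] = cp
        for u in adj[c]:
            if not removed[u]:
                stack.append((u, c))
    return cpar, dist_to

class ClosestBlack:
    def __init__(self, adj, n):
        self.cpar, self.dist_to = centroid_decompose(adj, n)
        self.best = [inf] * n  # min dist from any black vertex to centroid c

    def paint(self, v):
        # Relax every centroid ancestor of the newly black vertex.
        c = v
        while c != -1:
            self.best[c] = min(self.best[c], self.dist_to[v][c])
            c = self.cpar[c]

    def query(self, v):
        # Closest black vertex: combine distances through each ancestor.
        res = inf
        c = v
        while c != -1:
            if self.best[c] < inf:
                res = min(res, self.dist_to[v][c] + self.best[c])
            c = self.cpar[c]
        return res
```

Both `paint` and `query` walk the O(log n) centroid ancestors, so each of the two request types costs O(log n) lookups.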

what does relax ans mean?

If $$$ans_i$$$ is the answer on the i-th iteration, we take the minimum of the previous $$$ans_{i - 1}$$$ and the currently calculated minimum distance.

ohk thx

https://mirror.codeforces.com/contest/1790/submission/190946802 Can you tell me the time complexity of my F solution, and how to optimize it? Btw, let me describe my solution:

Initially, I found the minimum distance of every node from a black node and stored it in an array.

Then, during any query, I find the minimum distance between two black nodes in O(1), and then update the minimum distances of the nodes using BFS, continuing only while the new distance is smaller than the stored minimum.

Can someone help me understand why my solution to problem F TLEs?

My idea is to go through each black node and do a DFS up to `answer` steps (`answer` is the minimum distance so far) and see if I can find another black node that is closer than `answer`. If I find one, I update `answer` so that next time we can DFS to a smaller depth. However, this TLEs on test case 18. I thought the time complexity was O(n^1.5); please point out my mistake.

My submission 190916688

Thank you for your help.

Here is a counterexample that leads to O(n^2) for this method. Take a star graph with node 1 in the middle and all other nodes connected to 1. Color everything other than 1 black; then every time, we need to traverse all the edges connected to 1, which leads to O(n^2).

So we need to update dist[curr] to be the shortest distance to a black node, and stop if continuing will not improve the answer. In the above case, we should stop at node 1 every time, because continuing to 1's neighbours will not make things better.

Can someone explain tourist's solution for problem F in detail?

To anyone who has understood tourist's solution, please correct me if I am wrong

Whenever we color a node N1 black, we consider its ancestors in hierarchical order and recompute each one's best distance as the minimum of its distance from N1 and its previous best distance, until one of three things happens: we pass the root of the tree; we encounter an ancestor whose best distance is already better than its distance from N1, and we break (because none of the remaining ancestors can possibly benefit from N1); or we reach an ancestor whose distance from N1 is greater than the current answer (it can never contribute to the actual answer, as the answer only decreases or stays the same).

I don't know about its TC though.

There's a different solution for problem C. link


Please start the system testing. Why does it take so long, when other platforms give the new ratings 15 minutes after the contest ends?

Now it finally starts.

E can be done with binary search too

Here is my implementation: 191020389

For binary search, doesn't f(x) have to be monotonic? Sorry, I'm not understanding how BS won't get stuck somewhere. In this case there is a solution for a, so BS will always try it, but how do you prove that without knowing the solution beforehand?

We have a fixed number of bits, i.e. of x. Now we always assign a <= b such that a + b = 2*x. If we increase a, then b decreases, and the xor value of both decreases as well, which means the xor value is monotonically decreasing with respect to a; hence we apply binary search with l = 1 and r = x for the value of a.

If x=6, and a=2 and b=10, then a^b=8. Now if we increase a by 1: a=3, b=9, and a^b=10. So on increasing a, a^b increases. Now we increase a by 2: 5^7 = 2. Therefore a^b is not monotonic, so it is not a candidate for binary search.
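Spelling that counterexample out numerically (a throwaway check):

```python
# Fix x = 6, so a + b must be 12; list (a, b, a ^ b) for a = 2..6.
x = 6
triples = [(a, 2 * x - a, a ^ (2 * x - a)) for a in range(2, x + 1)]
print(triples)  # xor values: 8, 10, 12, 2, 0 -- not monotonic in a
```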


Let me think of a better explanation; give me 5 min.

I am applying binary search on a

An (a, b) pair exists when x = y, where, for a being mid, y = mid ^ (2*x - mid). If x > y, then we have to increase the xor, which can only be done by decreasing a: as I said earlier, we are looking for a solution with a <= b, and for the xor of a and b to increase, a should be decreased and b increased. Vice versa for y < x.

Can someone explain in detail where the O(n√n) time complexity for F comes from?

I was asked to give more details on my solution of 1790E - Vlad and a Pair of Numbers. I will also share it here.

Solution for problem E: This is a constructive problem. It has many different approaches and solutions.

We have $$$2$$$ operations, sum of the numbers and $$$\mathrm{xor}$$$ of the numbers. They must be equal. To be more precise, we want this equation to hold

$$$A + B = 2 \cdot (A\, \mathrm{xor}\, B) = 2 \cdot x$$$

The first thing we want to think about is: how fast can a naive iterative solution be? For a given $$$x$$$ we can iterate over all $$$A$$$ and get $$$B = A \, \mathrm{xor}\, x$$$. This can be done in $$$\mathcal{O}(x)$$$. We will not solve the problem this way (TLE would be the result), but we can implement this iterative solution for small tests, generate answers, and use them to find patterns and properties.
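Such a naive searcher might look like this (an illustrative sketch, not the linked submission; `brute` is a hypothetical name):

```python
def brute(x):
    # For each candidate a, the xor constraint forces b = a ^ x;
    # keep the pairs whose sum also equals 2 * x. O(x) per query.
    pairs = []
    for a in range(1, 2 * x + 1):
        b = a ^ x
        if b >= 1 and a + b == 2 * x:
            pairs.append((a, b))
    return pairs

print(brute(2))  # [(1, 3), (3, 1)]
print(brute(1))  # []
```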

Take a look at this solution https://mirror.codeforces.com/contest/1790/submission/190992078

Even from this `cerr` output we can see that for all numbers that have answers, the first way to build this answer is to take $$$A = x/2$$$, $$$B = 3 \cdot x/2$$$.

Actually this is enough to solve this problem. But that will not always be the case, and generally you also want to know why it works and how to prove it.

It makes sense to try analyzing individual bits for bitwise operations. Let's consider all possible combinations for bits in position $$$i$$$ in numbers $$$A$$$ and $$$B$$$:

We want to set bits in $$$A$$$ and $$$B$$$ in such a way that it balances out $$$A + B$$$ and $$$2 \cdot (A\, \mathrm{xor}\, B)$$$ and also matches $$$x$$$. There is only one way to add the same bit $$$i$$$ simultaneously to $$$A + B$$$ and to $$$2 \cdot (A\, \mathrm{xor}\, B)$$$: make $$$A[i] = 0$$$, $$$B[i] = 1$$$, $$$A[i-1] = 1$$$, $$$B[i-1] = 1$$$. This way we will get $$$2$$$ binary digit overflows in the sum, and the $$$\mathrm{xor}$$$ will have only one bit $$$i$$$ set.

$$$x$$$ = `010`

$$$A$$$ = `001`

$$$B$$$ = `011`

$$$A+B$$$ = `100`

$$$A\, \mathrm{xor}\, B$$$ = `010`

$$$(A\, \mathrm{xor}\, B) \cdot 2$$$ = `100`

This can be clearly seen on the small example $$$x=2, A=3, B=1$$$. The only way to set bit $$$i$$$ in number $$$x$$$ is to utilize two bits in numbers $$$A$$$ and $$$B$$$: bit $$$i$$$ and bit $$$(i-1)$$$. So if we want to set bit $$$i$$$ in number $$$x$$$ to $$$1$$$, then bit $$$(i-1)$$$ must be set to $$$0$$$.

Thus an answer exists for all numbers $$$x$$$ in which each bit equal to $$$1$$$ is preceded by a bit equal to $$$0$$$. In other words, for all $$$i$$$ with $$$x[i] = 1$$$ we must also have $$$x[i-1] = 0$$$. This also applies to the lowest bit: it has no preceding bit, so it must itself be equal to $$$0$$$.

All possible answers for the same number $$$x$$$ differ from each other only in arrangements of bits in positions where those bits should be different ($$$A[i] = 1$$$ and $$$B[i] = 0$$$ or $$$A[i] = 0$$$ and $$$B[i] = 1$$$).
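Putting the derived condition together gives a tiny closed-form solver (my own sketch; `solve_e` is a hypothetical name):

```python
def solve_e(x):
    # An answer exists iff x is even and no two adjacent bits of x are
    # both set (each 1-bit is preceded by a 0-bit).
    if x % 2 == 1 or (x & (x >> 1)):
        return -1
    # The pattern observed above: a = x / 2, b = 3x / 2.
    return x // 2, x + x // 2

print(solve_e(2))  # (1, 3)
print(solve_e(6))  # -1
```

Correctness of the construction is easy to check: with `a = x >> 1` and `b = x | (x >> 1)` (the two are disjoint when no adjacent bits are set), `a + b = 2x` and `a ^ b = x`.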

It can be easily done with binary search in the same complexity.

the most understandable tutorial, thank you

Is it possible to do F using a BFS? If I understand the tutorial correctly, we skip vertices v that have d[v] > ans. Similarly, we could do a BFS where the number of levels you visit does not exceed ans. Will this be as efficient as the DFS approach? Why or why not?

Can someone please explain the solution of F more clearly? I didn't understand why the editorial solution works, nor its time complexity.

I am really disappointed because it's my first contest and I solved only two and got only one right. But I think there is something wrong in problem B - Taisia and Dice, because the statement says (in any order / print any) and I think I got it right. Like here: for 3 12 6 my answer was 6 3 3, since 3 + 3 = 6 and 6 + 6 = 12, and that is 3 dice. I am really disappointed, but hey! It's my first one. Thank you :) <3

So, I was thinking about when to show my way of solving problems, and I think it's best if I do it only when I can solve at least three problems. So, as always, don't focus on my code, because it has less value than what I was thinking during the contest.

A: I believe everyone can solve this problem, but something curious happened to me: while writing the code, for some reason I was doubting whether it was 30 digits after the first 3, or whether the 3 counts towards the 30. This problem is too simple and is about speed, so I decided to handle the worst case, which is that the count begins after the first three: I looked up the first 30 digits after the decimal point, and for the string comparison I used the length of the string the problem gives, to make sure I don't overflow or access something that doesn't exist.

B: From this problem onward I began to use the best weapon beginners have, in my opinion: don't overcomplicate. Just try to solve it as if you were doing it by hand, note the way your brain thinks through it, and later translate that to code. This is really useful when you see that the given variables are really small; for example, in this problem $$$n = 50$$$, which means you only have to fill at most 50 dice values, and $$$t = 1000$$$, so even in the wildest case there are only 50000 dice to fill, which is a pretty small number for a problem. My way of solving it, in the simplest steps, was the following:

Make one die with value $$$s-r$$$, the difference. In my code I considered the possibility that the difference is more than six and more dice are needed, but the statement says an answer always exists and that removing exactly one die turns $$$s$$$ into $$$r$$$, so I had a bit of luck.

Fill the rest of the dice with what is necessary to make the total value $$$s$$$, but I fill them in the most brute-force way possible: one by one, adding one each time I pass over a die.

C: The same as problem B: try to solve it by hand and you will notice that if you sum up the positions that a number occupies across the given sequences, the one with the smallest sum is the first number in the original, and the one with the biggest sum is the last one. The same holds for the other numbers: the smaller the total sum of positions of a number, the more to the left it will be, and the bigger the sum, the more to the right its position is in the original array.

D: In this problem, I see 3 reasons why someone couldn't do it:

To solve the first one, as a beginner I think you must realize that you have nothing to lose, and use that in your favor: always push yourself to make at least one submission to prove that you tried your best.

For the second one, to be honest I don't understand binary search too deeply, but I know how to use the functions lower_bound() and upper_bound(); this is my way of passing questions that need low time complexity.

And the third is that, for example, a nest of dolls that doesn't work is $$${1,2,4}$$$; maybe you didn't read carefully that the difference between adjacent elements is at most 1, so my tip is to read a bit more slowly the closer you are to the end of the contest.

Hope that my way of thinking can help you in some way in your next contest.

Alternate approach for D: Divide and Conquer: https://mirror.codeforces.com/contest/1790/submission/190924408

Like merge sort, but the merge function instead calculates the number of matryoshka sets that can be combined across the left and right intervals.

Could someone tell me why using unordered_map in D TLEs while map does not? unordered_map code: 191060893, map code: 191061543

Hi, it's probably due to collisions, check this post for reference https://mirror.codeforces.com/blog/entry/62393.

I really don't understand the tutorial of problem F!! Can anyone explain it in more detail and prove its time complexity $$$O(n\sqrt{n})$$$?

I wonder how the bound of $$$O(n \sqrt{n})$$$ with DFS/BFS is proven. Suppose that we only update nodes with distance smaller than $$$2$$$. Even in such a case, we can visit all $$$n$$$ nodes on certain graphs. Even if we know that $$$dist \le \sqrt{n}$$$, how can we bound the number of visited nodes?

Exactly, I have the same question. Even if the max distance is √n, for every vertex we have to visit √n times the edges connected to it, which overall sums to I don't know what (but how can you say it is of order n√n?). Plus, I think the DFS works because it randomly chooses a vertex and replaces ans, which in turn reduces the total actual iterations per vertex. I am saying this because I remember a dude complaining that his code was giving TLE on the 7th test case with BFS, but it got AC with DFS.

With regard to my comment above, I would like to request a thorough check of the given solution of problem 1790F. I am somewhat confident that the solution isn't correct (i.e. it may not run in $$$O(n \log n)$$$ or $$$O(n \sqrt{n})$$$ for all trees). Thanks a lot!

Can someone explain the solution of E in more detail?

Why am I getting MLE for problem D on the first test case? CODE

Hi, could someone help me with my solution to 1790B? Test case 1 says that my solution doesn't pass because it doesn't add up to 15, but my solution outputs 6 3 3 3, which does add up to 15?

DFS-ish solution for problem D - Matryoshkas: First, we build a frequency map, as any approach needs to take care of the counts of each toy. Each key represents a toy size, and the associated value represents the number of toys having that size.

If we define a directed graph such that there is zero or one directed edge outgoing from any node, we can sort the available sizes (the input array) and start the DFS with the smallest size as the source. Obviously the smallest toys are the toys that don't have any smaller neighbour to "try out", thus it makes sense to start at the smallest toys. Since this is a directed graph and there is only one neighbour to "try out" per node, we don't need the usual visited array of DFS implementations. Instead, every time we visit a node we put one toy of that size in a set, and we update the frequency map by subtracting 1. Ultimately, the number of connected components found by the DFS is the minimum number of sets that can be formed with the toys.

Sample code: 223730733
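The component count this DFS produces matches the frequency-map formula that the editorial's C++ solution for D also uses. An illustrative Python sketch (`min_sets` is a hypothetical name):

```python
from collections import Counter

def min_sets(sizes):
    # Each maximal run of consecutive sizes consumes one copy of every
    # size in the run, so a new set must start exactly where cnt[x]
    # exceeds cnt[x - 1].
    cnt = Counter(sizes)
    return sum(max(0, c - cnt[x - 1]) for x, c in cnt.items())

print(min_sets([2, 2, 3, 4]))        # 2 sets: {2, 3, 4} and {2}
print(min_sets([1, 2, 3, 2, 3, 4]))  # 2 sets: {1, 2, 3} and {2, 3, 4}
```

`Counter` returns 0 for missing keys, which handles the boundaries of each run for free.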

I don't understand in G why this test case expects NO:

test1

```
5 4
1 5
3
1 2 3 4 5
1 2
1 4
2 3
3 5
```

There is a path from 3 to 1 whose vertices all have bonuses, so we can reach 1 using the rules. Can someone explain? Also, sorry for necroposting.

Problem C is the worst problem I've seen so far. Ruined my day, thanks.

Can we solve F with Square Root Decomposition?

241357745

Why am I getting TLE? Can anyone please help me? Thanks in advance.

Do not sort.