But if I just erase the first three lines of that solution, the solution time changes from 4s+ to 1.2s. Why?
Just wait for the open hacking phase to finish. You will have your answer.
I tried it with a worst-case test, but it still passes.
I think the solution is relying on the third line, #pragma GCC optimization ("unroll-loops"). I guess GCC will be able to optimize it so that it's only O(n) (or something similar in run time). I think this actually might be much harder to hack than expected.
#pragma GCC optimize("Ofast")
#pragma GCC target("avx,avx2,fma")
These commands are important; you can check that it passes without "unroll-loops". Obviously, the compiler can't optimize this to $$$O(n)$$$ (see the runtime — almost a second); it is just a very fast $$$O(n^2)$$$. And I bet it won't be hacked — you can check for yourself: this code runs in less than a second on the max test.
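For reference, here is a minimal sketch of the kind of code this applies to (not the exact submission; the variable names and I/O are my own guess, but the inner statement is the one discussed later in the thread):

```cpp
#pragma GCC optimize("Ofast")
#pragma GCC target("avx,avx2,fma")
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> a(n + 1, 0);
    for (int i = 1; i <= n; i++) {
        int x;
        scanf("%d", &x);
        a[i] = a[i - 1] + x;                  // prefix sums
    }
    ll ans = 0;
    // O(n^2) double loop; the inner statement is branch-free, so with the
    // pragmas above GCC can turn it into packed SIMD compares and adds
    for (int i = 1; i <= n; i++)
        for (int j = i; j <= n; j++)
            ans += (ll)(a[j] - a[i - 1] == j - i + 1);
    printf("%lld\n", ans);
    return 0;
}
```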
How does this happen? Why could the solution pass even the worst test case?
C++ is fast, especially with this type of operation and #pragma GCC optimize("Ofast"). Without any optimizations it runs in 3.5 seconds.
Quite a few times in CP, we deal with problems where the naive solution gives O(n^2) time complexity. Owing to the constraints, which are usually of the order of 10^5 with a 2-second limit, we optimize our code to O(n). But I had this question: can we solve all of these problems using the O(n^2) naive solution and this optimization?
Not all of them.
So, is the above mentioned case, an exception?
I wonder, why doesn't this always work then? What does it depend on exactly?
It depends on many things, such as the type of operations and whether the compiler can optimize them.
So can this sometimes make the program even slower? Because otherwise I would include it every time and it could probably make the program faster.
How can it do it in only O(n)? That is better than most people's optimal solution (which is O(n log n)). Also, the compiler cannot optimize away the comparisons in that case, because it knows it has to continue through the loops to get the answer (there is no pattern, and no point where continuing the loop becomes useless). If what you are saying were true, we would have a sort that works in O(n) lol. That would be better than the best sorts, which work in O(n log n) for any values/data types.
I tried all 6 possible cases with the first three lines and found that the line #pragma GCC optimize("Ofast") must be included for the solution to pass.
Only that is not enough, check
89966688. It is enough.
Wow, didn't know that 64-bit was that fast. Thanks.
It is fast because,
How can we actively try to write such code? I found a YouTube video about writing
return condition_false * answer + condition_true * other_ans // zero/false will cancel it
and, instead of if (x > 4) return true; else return false;, writing return x > 4;
Most of the time the compiler optimizes branches; mostly try to avoid false*ans1 + true*ans2 and use ?: instead.
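A small illustration of that advice (my own hypothetical functions, just to contrast the styles):

```cpp
// arithmetic trick from the video: works, but obscures the intent,
// and both sides are always computed and multiplied
int pick_mul(int x, int a, int b) {
    return (x > 4) * a + (x <= 4) * b;
}

// the ternary says the same thing; the compiler can still emit
// branch-free code (e.g. a conditional move) when that is faster
int pick_ternary(int x, int a, int b) {
    return x > 4 ? a : b;
}

// and instead of "if (x > 4) return true; else return false;"
bool is_big(int x) {
    return x > 4;
}
```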
I think the only solution to this is that if O(n) is expected, then the time limit should be at most 0.8 seconds. The O(n^2) solution is 20 times slower than the intended one.
Then it will be very unfair for Python users.
Limit C++ only then.
Making $$$n$$$ up to $$$3*10^5$$$ is enough. Most problems whose intended solution is $$$O(n)$$$ or $$$O(n \log_2 n)$$$ use $$$n$$$ up to $$$3*10^5$$$, which would obviously make that $$$O(n^2)$$$ solution fail. I wonder why this time it is only $$$10^5$$$...
I think increasing max N is typically better.
Increasing max N 10 times would increase n^2 solution time 100 times.
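To spell out the arithmetic: for an $$$O(n^2)$$$ solution, $$$T(10n)/T(n) = (10n)^2/n^2 = 100$$$, while tightening the time limit from 2 s to 0.8 s only changes the budget by a factor of $$$2.5$$$.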
Probably it will fail Python. The fact that the time limit is the same across all languages should be changed, imho.
Less likely than with decreasing the time limit.
I got TLE during the contest (using the O(n^2) solution). But with your first 3 lines added, it's accepted now.
From what I see, with this pragma the code begins to use SSE instructions, so I assume it basically processes four values at a time.
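Roughly what that means here (a hand-written analogy; the compiler actually uses packed SSE/AVX registers rather than four scalar statements):

```cpp
#include <vector>
using namespace std;
typedef long long ll;

// 4-way unrolled version of the inner loop from the sketch above,
// only to illustrate "four values at a time"
ll inner(const vector<int>& a, int n, int i) {
    ll ans = 0;
    int j = i;
    for (; j + 3 <= n; j += 4) {
        ans += (ll)(a[j]     - a[i - 1] == j     - i + 1);
        ans += (ll)(a[j + 1] - a[i - 1] == j + 1 - i + 1);
        ans += (ll)(a[j + 2] - a[i - 1] == j + 2 - i + 1);
        ans += (ll)(a[j + 3] - a[i - 1] == j + 3 - i + 1);
    }
    for (; j <= n; j++)                       // scalar tail for the leftovers
        ans += (ll)(a[j] - a[i - 1] == j - i + 1);
    return ans;
}
```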
If I use those 3 lines in every solution, can I get an advantage every time or not?
Sometimes; it mostly depends on the structure of the code.
LoL, quite unexpected, but cool!
Turns out you can do on the order of 10^10 operations in a second on CodeForces servers.
Not accurate. $$$10^{10}$$$ operations would theoretically take around 1800 ms or more in that problem under the same conditions. The number of operations in the worst case is specifically $$$n*(n-1)/2$$$, which is around $$$5*10^9$$$ operations.
More or less accurate: the number of loop iterations is $$$5*10^9$$$, but each iteration consists of several operations:
ans += (ll)(a[j] - a[i-1] == j - i + 1);
and then there is also the loop bookkeeping: one increment and one comparison per iteration.
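A rough per-iteration breakdown (assuming a[i-1] and the loop bounds stay in registers; exact instruction counts depend on the compiler):

```cpp
// ans += (ll)(a[j] - a[i-1] == j - i + 1);   per scalar iteration, roughly:
//   1 load        : a[j]            (a[i-1] is loop-invariant and can stay in a register)
//   1 subtraction : a[j] - a[i-1]
//   1 comparison  : == j - i + 1    (the right-hand side is just j plus a constant)
//   1 addition    : ans += 0 or 1
// plus the loop bookkeeping: ++j and the j <= n check
```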
Oh yes, you are right. So that is two array accesses, one comparison, and one increment in each iteration.
Yes, but in this case the compiler uses the AVX instruction set to compute 4 iterations at the same time; if you comment out
#pragma GCC target("avx,avx2,fma")
the time will jump to 4 seconds.
Pragma added to default template... lol
Also, I wonder how many people lost score on this because of failing to break this....
98 hacking attempts were made.
That's massive... I am glad that I do not participate in hacking...
You don't lose or gain score in open hacking.
I see...my bad then...
Also see this problem; the brute-force solution also passed using command lines.
Command lines do make your solution faster; it might pass $$$10^{10}$$$ operations within seconds, and it's almost impossible to hack because it's basic-level optimization.
I have also used the above concept but was getting TLE. Then I read this blog and added those 3 lines, but I am still getting TLE.
It won't work with just any $$$O(n^2)$$$ solution; it needs to be coded in a very specific way so the compiler can understand and optimize it.
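As one concrete illustration (my own example, not from the blog): keeping the inner loop as a branch-free ans += (condition) accumulation, like the code discussed above, is what lets the compiler vectorize it; add a data-dependent early exit and the same pragmas usually stop helping.

```cpp
#include <vector>
using namespace std;
typedef long long ll;

// this shape usually does NOT auto-vectorize: the early break makes the
// trip count data-dependent, so the compiler processes elements one by one
// (note: it also computes something different, stopping at the first
// mismatch; the point here is only the loop shape)
ll count_prefix_run(const vector<int>& a, int n, int i) {
    ll ans = 0;
    for (int j = i; j <= n; j++) {
        if (a[j] - a[i - 1] != j - i + 1) break;
        ans++;
    }
    return ans;
}
```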