My first ever blog on CodeForces here. Firstly, a huge thanks to MikeMirzayanov and all the people behind this wonderful ecosystem for allowing me to improve my programming skills, which enable me to think differently.
Now, onto the travesty: here is problem E from the most recent Div. 3 (Round 1027). Here is the problem statement: https://mirror.codeforces.com/contest/2114/problem/E
The question itself is rather simple, but consider the following two submissions:
https://mirror.codeforces.com/contest/2114/submission/322609464
https://mirror.codeforces.com/contest/2114/submission/321513639
You may notice that these two submissions contain identical code. Curiously, however, PyPy gives TLE while Python gives AC. Considering that Codeforces almost always asks me to switch to PyPy when I try to submit in Python, since it is usually faster, it made me wonder what made Python run faster than PyPy here. (I used the latest available version of each.)
Any insights into this would be extremely useful.
Additionally, any input on optimising my solution to problem D of the same contest is also welcome.
PROBLEM: https://mirror.codeforces.com/contest/2114/problem/D
MY SOLUTION: https://mirror.codeforces.com/contest/2114/submission/321675571
PS 1: Do tag me if you comment on this post after figuring anything out.
PS 2: I would be extremely grateful if you could upvote this (I don't care about contribution etc., but I want genuine issues like these to reach those who can solve them, so that we can grow together).

Hello there!
After analyzing your code, the only thing that kind of stuck out to me was the `del` operator. After doing a bit of digging, I came across this page: https://doc.pypy.org/en/latest/cpython_differences.html
There, it states that `del` can sometimes execute in a non-immediate form. So, I simply removed the `del` operator from your code and it sped up. Here are the specific results on my machine with a generated test (the only difference between E1.py and E2.py is that E2.py does not use the `del` command).
Hey there, hope you are doing well!!!
Removing the `del` did, in fact, reduce the time taken and got me the AC. Reading docs like these often gets very intimidating, and your explanation was very succinct and clear. Thanks a lot for your help!!!
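For anyone hitting the same thing, the shape of the change was roughly this (a hypothetical toy workload, not the actual submission; the function names and the dict-based loop are made up for illustration, and only the `del` differs between the two variants):

```python
import time

# Hypothetical workload (NOT the actual submission): the same computation
# with and without an explicit `del` of each dict entry.
def with_del(n):
    d = {}
    total = 0
    for i in range(n):
        d[i] = i * i
        total += d[i]
        del d[i]          # explicitly drop the entry on every iteration
    return total

def without_del(n):
    d = {}
    total = 0
    for i in range(n):
        d[i] = i * i
        total += d[i]     # entries are simply left for the GC
    return total

n = 200_000
t0 = time.perf_counter()
r1 = with_del(n)
t1 = time.perf_counter()
r2 = without_del(n)
t2 = time.perf_counter()
assert r1 == r2           # same answer either way
print(f"with del:    {t1 - t0:.4f}s")
print(f"without del: {t2 - t1:.4f}s")
```

On CPython the two variants are usually close; PyPy's garbage collector does not do reference counting, so deletion-heavy code can behave differently there, per the docs page linked above.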
If you could also help in optimising my solution to problem D (linked in the original blog), that would be wonderful!
Thanks once again for all your help and support. The existence of people like you makes CP so fun and engaging.
Uh, the most obvious thing that might be slowing down your code is the `Counter`, perhaps. As a matter of fact, there is no need to actually use that counter, as your `find_min_max_second` would work just fine... After all, you could make it return the smallest value and the second-smallest value even if the two are the same. Does that make sense? Then, inside your loop, you could just do something like this: notice that I use $$$m_1$$$ and $$$m_2$$$ to generalize the minimum in X, then the minimum in Y, the maximum in X, and finally the maximum in Y. Does that make sense?
I'm not too familiar with the exact complexity of the `Counter` class, but seeing as it uses a Python dictionary, I believe worst-case access can be $$$O(n)$$$ instead of $$$O(1)$$$ (they use hash tables). So then, just by eliminating that counter, your code should pass...
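If it helps, here is a rough sketch of that idea (the name `find_min_max_second` is taken from the solution above, but this body is my own guess at what it could look like): a single pass that returns the smallest, second-smallest, largest, and second-largest values, where the second-smallest may equal the smallest when the minimum occurs twice.

```python
def find_min_max_second(vals):
    # One pass: (m1, m2) = smallest and second smallest,
    # (M1, M2) = largest and second largest. Duplicates count,
    # so m2 == m1 is possible when the minimum occurs twice.
    m1 = m2 = float("inf")
    M1 = M2 = float("-inf")
    for v in vals:
        if v < m1:
            m1, m2 = v, m1
        elif v < m2:
            m2 = v
        if v > M1:
            M1, M2 = v, M1
        elif v > M2:
            M2 = v
    return m1, m2, M1, M2

print(find_min_max_second([3, 1, 4, 1, 5]))  # -> (1, 1, 5, 4)
```

Then, when the point you remove attains $$$m_1$$$, fall back to $$$m_2$$$; otherwise keep $$$m_1$$$ (and symmetrically for the maxima, and for X vs. Y).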
Yeah, what you are saying does make sense. I was trying to code your idea but kept getting stuck. I'll try again and will hopefully upsolve this question.
Thanks a lot for all your help :) .
I had a similar experience in the most recent Edu round, problem 2111D - Creating a Schedule, submissions 322723177 (AC with Python 3.13) and 322721346 (TLE with PyPy 3.10). The submissions include a call to `sort()` on a list of length $$$10^5$$$. I would expect the $$$O(n \log n)$$$ worst-case sorting algorithm to pass this comfortably, but it didn't. So I went to Google to find out if it's really $$$O(n \log n)$$$ worst-case. Then I came across this, saying that starting with version 3.11 an improvement was applied to the sorting algorithm. The article quotes the changelog, saying that most uses of `list.sort()` probably won't see a significant time difference, but may see significant improvements in cases where the former strategy was exceptionally poor. I have yet to read anything further (and I'm not sure whether I will understand it if I do), so I am not sure if it is directly related.
The issue seems to be your $$$O(N^2)$$$ string building: `+=` on strings makes a copy every time, so you're copying $$$O(N)$$$ chars $$$N$$$ times in a loop, which is $$$O(N^2)$$$.
As a proper alternative, use a list `arr` for string fragments, append to it in a loop, and do `''.join(arr)` at the end to obtain one big concatenated string (or just `print(*arr)`). (The patch to Timsort you've mentioned usually doesn't manifest significant runtime differences unless you specifically look for it, and it definitely does not change the $$$n \log n$$$ complexity of the sort.)
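A minimal sketch of the difference (the function names are made up, purely for illustration):

```python
def build_quadratic(parts):
    s = ""
    for p in parts:
        s += p            # copies the whole string so far -> O(N^2) total
    return s

def build_linear(parts):
    arr = []              # collect fragments...
    for p in parts:
        arr.append(p)     # ...amortized O(1) per append...
    return "".join(arr)   # ...then concatenate once in O(N)

parts = [str(i) for i in range(5)]
print(build_linear(parts))  # -> 01234
```

(Note that CPython sometimes optimizes `s += p` in place when the string has no other references, which can hide the quadratic cost; that's an implementation detail, and PyPy makes no such guarantee.)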
Oh yes, I was also afraid the string concat made it worse, but somehow thought to switch languages first before fixing it. And thank you for the note on Timsort!