# Why My "Optimized" Code Ran Slower Than the Less Efficient One↵
↵
If you spend enough time solving problems on platforms like Codeforces (and if you're reading this, I know you have), you eventually run into a moment that makes you question everything you thought you understood about optimization.
↵
Recently, I solved a problem in two different ways:↵
↵
* **Version 1:** A straightforward sorting-based approach with **O(n log n)** time complexity.↵
* **Version 2:** A more refined counting-based approach with **O(n)** time complexity (both sketched below).
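
The exact problem doesn't matter here, so below is a hypothetical Java stand-in with the same shape: `distinctBySorting` leans on the library sort, while `distinctByCounting` walks a boolean array over a known value range `maxV`. The task, names, and code are my own invention for illustration, not my actual submissions:

```java
import java.util.Arrays;

public class TwoVersions {

    // Version 1: sort first, then scan adjacent elements. O(n log n) time.
    static int distinctBySorting(int[] a) {
        int[] b = Arrays.copyOf(a, a.length);
        Arrays.sort(b);                          // heavily optimized library sort
        int distinct = b.length > 0 ? 1 : 0;
        for (int i = 1; i < b.length; i++) {
            if (b[i] != b[i - 1]) distinct++;    // a new run of equal values starts
        }
        return distinct;
    }

    // Version 2: counting over the value range [0, maxV].
    // O(n + maxV) time, but O(maxV) extra space.
    static int distinctByCounting(int[] a, int maxV) {
        boolean[] seen = new boolean[maxV + 1];
        int distinct = 0;
        for (int x : a) {
            if (!seen[x]) {                      // count each value only once
                seen[x] = true;
                distinct++;
            }
        }
        return distinct;
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3};
        System.out.println(distinctBySorting(data));      // 7
        System.out.println(distinctByCounting(data, 9));  // 7
    }
}
```

Notice that the counting version allocates extra space proportional to the value range, which would also be consistent with the higher memory usage in the table below.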
↵
Naturally, I expected the optimized version to be faster.↵
↵
It wasn’t.↵
↵
In fact, the supposedly *better* algorithm reported a **higher runtime** on the judge.↵
↵
Here are the two submissions, summarized from the attached screenshot:
↵
| Submission | Time Complexity | Space Complexity | Runtime | Memory |↵
| ------------------------------------ | --------------- | ---------------- | ---------- | ----------- |↵
| **Top (Optimized Version)** | **O(n)** | **O(n)** | **530 ms** | **1300 KB** |↵
| **Bottom (Initial Sorting Version)** | **O(n log n)** | **O(n)** | **468 ms** | **700 KB** |↵
↵
At first glance, that feels wrong. If the time complexity is lower, shouldn’t the runtime also be lower?↵
↵
That assumption is where many developers, especially beginners in DSA, run into one of the most important lessons in algorithmic thinking:
↵
> **Big-O complexity predicts growth, not exact runtime.**↵
↵
---↵
↵
## What We Think Happens↵
↵
Most of us learn complexity like this:↵
↵
* O(n) is better than O(n log n)↵
* O(n log n) is better than O(n²)↵
* Therefore lower complexity = faster code↵
↵
That logic is directionally correct.↵
↵
But only when all else is equal.↵
↵
And in real execution environments, **all else is rarely equal.**↵
↵
---↵
↵
## Why the "Worse" Algorithm Can Run Faster↵
↵
### 1. Input Size Matters More Than You Think↵
↵
Many competitive programming problems have surprisingly small constraints.↵
↵
If `n` is tiny (say, 100 or 200), then:
↵
* `O(n)` and `O(n log n)` may execute in nearly the same time↵
* The difference becomes too small to matter in practice↵
↵
For small inputs, asymptotic growth hasn’t had enough room to show itself.↵
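
To put rough numbers on it: at `n = 200`, `log2(200) ≈ 7.6`, so the sorting approach does on the order of `200 * 7.6 ≈ 1500` comparisons versus `200` steps for the linear one. Both finish in microseconds, and that gap is far smaller than ordinary measurement noise.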
↵
---↵
↵
### 2. Constant Factors Are Real↵
↵
Big-O hides constants.↵
↵
That means two algorithms with these actual operation counts:

* `1000 * n` (still just O(n))
* `2 * n * log n` (still just O(n log n))

may behave very differently for practical input sizes.
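
Just to show how far out the crossover can be with those (made-up) constants: `1000 * n < 2 * n * log2(n)` only when `log2(n) > 500`, i.e. when `n > 2^500`. For any input that fits in physical memory, the "worse" complexity class wins every single time.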
↵
A theoretically superior algorithm can still lose if:↵
↵
* It has more branching↵
* It performs more checks↵
* It uses less optimized operations↵
* It introduces overhead through custom logic↵
↵
Meanwhile, built-in library functions like sorting are often heavily optimized at the JVM/runtime level.↵
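
For instance, in Java, `Arrays.sort` on primitive arrays uses a heavily tuned dual-pivot quicksort, and object arrays are sorted with TimSort; years of engineering have gone into those paths, while a hand-rolled counting loop gets none of that.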
↵
---↵
↵
### 3. Online Judge Runtime Is Not Perfectly Deterministic↵
↵
The runtime shown on coding platforms is influenced by more than just your code:↵
↵
* Shared server load↵
* JVM warm-up / language startup costs↵
* Internal caching↵
* Parallel submissions from other users↵
* Benchmark noise↵
↵
So a difference of a few milliseconds often means very little.↵
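
You can see one of these effects on your own machine. Here's a toy JVM experiment (my sketch, not how judges actually measure): time the same `Arrays.sort` call a few times in one process, and the first run is typically the slowest because the JIT compiler hasn't warmed up yet.

```java
import java.util.Arrays;
import java.util.Random;

public class WarmupDemo {
    public static void main(String[] args) {
        // Fixed seed so runs are comparable; 200k ints is enough to measure.
        int[] base = new Random(42).ints(200_000).toArray();
        for (int run = 1; run <= 3; run++) {
            int[] copy = Arrays.copyOf(base, base.length);  // fresh unsorted input
            long start = System.nanoTime();
            Arrays.sort(copy);
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Run " + run + ": " + ms + " ms");
        }
    }
}
```

As far as I can tell, judges run each submission in a fresh process, so startup and warm-up costs land inside every measurement, on top of whatever noise the shared machine adds.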
↵
---↵
↵
## Then Why Do We Still Care About Big-O?↵
↵
Because Big-O becomes decisive **as scale increases.**↵
↵
A small input may hide the difference.↵
↵
A large input exposes it brutally.↵
↵
What barely matters at `n = 100` can become the reason your solution TLEs at `n = 100000`.↵
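
Concretely: at `n = 100`, `n * log2(n) ≈ 664`, barely different from `n = 100` in wall-clock terms. At `n = 100000`, it's about 1.7 million operations, while an `O(n²)` approach would need on the order of `10^10`, a near-certain TLE under typical 1-2 second limits.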
↵
That’s why strong engineers and competitive programmers optimize for:↵
↵
> **Scalability first, micro-benchmarks second.**↵
↵
---↵
↵
## The Real Lesson Here↵
↵
The biggest takeaway from this experience **wasn’t about one Codeforces submission**.
↵
It was this:↵
↵
> **Optimization is not about blindly chasing lower Big-O.**↵
>↵
> **It’s about understanding context.**↵
↵
Good engineers ask:↵
↵
* *What are the constraints?*↵
* *Does this optimization matter at this scale?*↵
* *What hidden constant factors exist?*↵
* *Is the cleaner solution already efficient enough?*↵
↵
That is a much more mature mindset than simply memorizing complexity rankings.↵
↵
---↵
↵
## What Competitive Programming Quietly Teaches You (IMO)↵
↵
Problems like this build a deeper intuition:↵
↵
You stop thinking:↵
↵
> “Lower complexity always wins.”↵
↵
And start thinking:↵
↵
> “Lower complexity wins when scale justifies it.”↵
↵
That distinction matters far beyond coding platforms.↵
↵
It matters in:↵
↵
* Backend systems↵
* Database optimization↵
* Frontend rendering↵
* Distributed systems↵
* Performance engineering↵
↵
---

## Here's What I Think About All of This
↵
Sometimes the most valuable part of solving a problem isn’t getting Accepted.↵
↵
It’s discovering why reality doesn’t match the textbook expectation.↵
↵
That’s where understanding begins.↵
↵