Tutorial about time and memory complexity

Revision en4, by Pa_sha, 2024-10-08 01:42:49

Motivation

I have seen that many beginners don't really understand how to compute complexity and make lots of mistakes in their calculations. Even more skilled programmers can make mistakes in this topic. The main focus of this blog will be a more intuitive way to understand time and memory complexity. There will also be a more formal treatment, but it is hidden in a spoiler.

What is complexity

In simple words, complexity is just the number of operations that a program performs. For example, if you have a loop from 1 to 1000 with step 1, that is 1000 operations, and if inside it you do one addition and one multiplication, that is already 3000 operations (since you do these two operations 1000 times). Memory works the same way: if you create an array of 100 elements, it has memory complexity 100.

In practice, however, it is hard to count operations exactly, and it is also not really needed, since a computer performs 1 operation very fast, and 10 or even 1000 operations just as easily. That is why, when we calculate complexities, we ignore constant factors: $$$n$$$ operations (where $$$n$$$ is some variable with a numeric value) is the same as $$$2\cdot n$$$ operations, and $$$n+100$$$ operations is the same as $$$n$$$ operations. It is like saying that $$$1647859$$$ operations are roughly $$$10^6$$$ operations, which is much easier to work with. This makes calculating complexities simpler, since we do not have to account for every single operation. Also, since programs work with variables that have some range, we will express complexities in terms of these variables as well.

Types of complexities

  • $$$O$$$. This notation is used when we want to say that our program performs *at most* this many operations (up to a constant factor). It means that if your program has one loop which takes $$$n$$$ operations, it has complexity $$$O(n)$$$, but it also has complexity $$$O(n^2)$$$. So, if the real number of operations the program makes is $$$a$$$ and the number of operations the complexity says is $$$b$$$, then $$$a\le b$$$.

  • $$$o$$$. This notation is used when we want to say that our program performs *strictly fewer* operations than this complexity. It means that if your program has one loop which takes $$$n$$$ operations, it can have complexity $$$o(n^2)$$$, and also $$$o(n^9)$$$, but not $$$o(n)$$$. So, if the real number of operations the program makes is $$$a$$$ and the number of operations the complexity says is $$$b$$$, then $$$a<b$$$.

  • $$$\Theta$$$. This notation is used when we want to say that our program performs *the same* number of operations as this complexity (up to a constant factor). It means that if your program has one loop which takes $$$n$$$ operations, it has complexity $$$\Theta(n)$$$, and only $$$\Theta(n)$$$. So, if the real number of operations the program makes is $$$a$$$ and the number of operations the complexity says is $$$b$$$, then $$$a=b$$$.

  • $$$\Omega$$$. This notation is used when we want to say that our program performs *at least* as many operations as this complexity. It means that if your program has one loop which takes $$$n$$$ operations, it has complexity $$$\Omega(n)$$$, and it is also $$$\Omega(1)$$$. So, if the real number of operations the program makes is $$$a$$$ and the number of operations the complexity says is $$$b$$$, then $$$a\ge b$$$.

  • $$$\omega$$$. This notation is used when we want to say that our program performs *strictly more* operations than this complexity. It means that if your program has one loop which takes $$$n$$$ operations, it has complexity $$$\omega(1)$$$ and also $$$\omega(\log n)$$$, but not $$$\omega(n)$$$. So, if the real number of operations the program makes is $$$a$$$ and the number of operations the complexity says is $$$b$$$, then $$$a>b$$$.

Formal definition (not needed for understanding)

In the rest of this blog I will use only big-$$$O$$$ notation, since it is, in my opinion, the most important and the most widely used.

Evaluating complexities

There are some rules for evaluating complexities and here are some of them:

  • $$$O(C)=O(1)$$$ for a constant C.

  • $$$O(x)+O(C)=O(x)$$$ for a constant C and a variable x.

  • $$$O(x)+O(y)=O(x+y)=O(\max(x,y))$$$ for variables x and y.

  • $$$O(x)\cdot O(y)=O(x\cdot y)$$$ for variables x and y.

  • $$$O(x)+O(x)=O(x)$$$ for variable x.

Example 1
Example 2
Example 3

As we can see, computing a complexity is essentially solving a math inequality. This is why it is worth memorizing some standard cases, like the harmonic series, so that we do not come to wrong conclusions about the complexity.

More advanced examples
