Motivation
I have noticed that many beginners don't really understand how to compute complexity and make lots of mistakes in their calculations. Even more experienced programmers can make mistakes in this topic. The main focus of this blog will be a more intuitive way to understand time and memory complexity. There will also be a more formal treatment, but hidden in spoilers.
What is complexity
In simple words, complexity is just the number of operations that a program performs. For example, if you have a loop from 1 to 1000 with step 1, that is 1000 operations, and if in each iteration you also do one addition and one multiplication, it is already 3000 operations (since you perform these two operations 1000 times). Memory works the same way: if you make an array of 100 elements, it has memory complexity 100. In practice, however, it is hard to count operations exactly in the general case, and it is also unnecessary, since a computer performs 1 operation really fast, and likewise 10 or 1000 operations. That is why, when we calculate complexities, we ignore constant factors. It means that n operations (where n is some variable with a numeric value) is the same as 2*n operations, and n+100 operations is the same as n operations. Similarly, if a program made 1647859 operations, we can just say it made about $$$10^6$$$ operations, which is much easier to work with. The reason for this convention is that it makes complexities easier to calculate, since you do not need to account for every single operation. Also, since programs work with variables that have some range, we will express complexities in terms of variables as well.
Types of complexities
$$$O$$$. This notation is used when we want to say that our program makes at most this many operations. It means that if your program has one loop which takes n operations, it has complexity $$$O(n)$$$, but it also has complexity $$$O(n^2)$$$. So, if we say that the real number of operations the program makes is a and the number of operations the complexity describes is b, then $$$a\le b$$$ (up to a constant factor).
$$$o$$$. This notation is used when we want to say that our program makes strictly fewer operations than this complexity. It means that if your program has one loop which takes n operations, it can have complexity $$$o(n^2)$$$, and also $$$o(n^9)$$$, but not $$$o(n)$$$. So, if we say that the real number of operations the program makes is a and the number of operations the complexity describes is b, then $$$a<b$$$ (no matter what constant factor we allow, for large enough inputs).
$$$\Theta$$$. This notation is used when we want to say that our program makes the same number of operations as this complexity. It means that if your program has one loop which takes n operations, it has complexity $$$\Theta (n)$$$. So, if we say that the real number of operations the program makes is a and the number of operations the complexity describes is b, then $$$a=b$$$ (up to constant factors).
$$$\Omega$$$. This notation is used when we want to say that our program makes at least as many operations as this complexity. It means that if your program has one loop which takes n operations, it has complexity $$$\Omega (n)$$$, and it is also $$$\Omega (1)$$$. So, if we say that the real number of operations the program makes is a and the number of operations the complexity describes is b, then $$$a\ge b$$$ (up to a constant factor).
$$$\omega$$$. This notation is used when we want to say that our program makes strictly more operations than this complexity. It means that if your program has one loop which takes n operations, it has complexity $$$\omega (1)$$$ and also $$$\omega (\log n)$$$, but not $$$\omega(n)$$$. So, if we say that the real number of operations the program makes is a and the number of operations the complexity describes is b, then $$$a> b$$$ (no matter what constant factor we allow, for large enough inputs).
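For completeness, the five notations above can be stated formally as follows (here $$$f(n)$$$ plays the role of a, the real operation count, $$$g(n)$$$ the role of b, the stated complexity, and c is a positive constant):

```latex
% Standard asymptotic definitions: f(n) = real operation count,
% g(n) = stated complexity, c > 0 a constant, n_0 a threshold.
\begin{align*}
f(n) = O(g(n))      &\iff \exists\, c > 0,\, n_0 : \forall n \ge n_0,\; f(n) \le c \cdot g(n) \\
f(n) = o(g(n))      &\iff \forall c > 0\; \exists\, n_0 : \forall n \ge n_0,\; f(n) \le c \cdot g(n) \\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n)) \\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\, n_0 : \forall n \ge n_0,\; f(n) \ge c \cdot g(n) \\
f(n) = \omega(g(n)) &\iff \forall c > 0\; \exists\, n_0 : \forall n \ge n_0,\; f(n) \ge c \cdot g(n)
\end{align*}
```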
In the rest of this blog I will use only big $$$O$$$ notation, since it is, in my opinion, the most important and the most popular.
Evaluating complexities
There are some rules for evaluating complexities; here are the most useful ones:
$$$O(C)=O(1)$$$ for any constant C.
$$$O(x)+O(C)=O(x)$$$ for any constant C and variable x.
$$$O(x)+O(y)=O(x+y)=O(\max(x,y))$$$ for variables x and y.
$$$O(x)\cdot O(y)=O(x\cdot y)$$$ for variables x and y.
$$$O(x)+O(x)=O(x)$$$ for variable x.
As we can see, computing a complexity is essentially solving a math inequality. This is also why we need to memorize some special cases, such as the harmonic series, so that we do not misjudge a complexity.