gkosthefinalboss7's blog

By gkosthefinalboss7, history, 46 hours ago, In English

Hello to people who are reading this!

I have decided to write an entry because there are two things I do not understand.

Question #1: Why do we throw out the constants when we state the complexity of an algorithm? For example, instead of saying O(2n), we just say O(n). Why is this? I cannot understand it.

Question #2: What does amortized mean? I've heard that many times and I don't know what it means. Can some of you tell me?

Thank you in advance for your answers!


»
45 hours ago, # |

Disclaimer: I am new and still learning.

Question 1: Because this is the definition of big-O notation (you can search for it on Google). The definition states that if there exist constants $$$c$$$ and $$$x_0$$$ such that $$$c \cdot f(x) \ge g(x)$$$ whenever $$$x \ge x_0$$$, then we say the time complexity of $$$g(x)$$$ is $$$O(f(x))$$$. For example, for $$$g(n) = 2n$$$ we can take $$$f(n) = n$$$ and $$$c = 2$$$, since $$$2 \cdot n \ge 2n$$$ for every natural number $$$n$$$, so $$$O(2n)$$$ is equal to $$$O(n)$$$. The reason to define it this way is that on current computers a constant factor does not affect much (they have very high computing power), so we treat it as negligible; we only care about the growth trend as $$$n$$$ becomes large.
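Spelled out in symbols, the argument above is:

```latex
g(x) \text{ is } O(f(x)) \iff \exists\, c > 0,\ x_0 \ \text{such that}\ g(x) \le c\, f(x) \ \text{for all } x \ge x_0.
```

```latex
\text{Here } g(n) = 2n,\ f(n) = n,\ c = 2,\ x_0 = 1:\quad 2n \le 2 \cdot n \ \text{for all } n \ge 1,\ \text{so } 2n = O(n).
```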

Question 2: Sometimes when you write a program, for example for a digit DP / DFS / graph problem, you will see a nested loop. We know that most nested loops are bad for competitive programming because they give $$$O(n^2)$$$ or even $$$O(n^k)$$$ and so cause TLE. But sometimes, even though we write a nested loop, it does not give $$$O(n^2)$$$; on the contrary, it has linear complexity $$$O(n)$$$.

Example code from himanshujaju

You can check the blog for it. Above is the classic Dijkstra algorithm; it looks like a nested loop, so $$$O(n^2)$$$, but it is actually not. How can we see that? The key is that we cannot analyze each iteration independently; we have to look at the run as a whole: overall, each node is visited exactly once and each edge of the graph is also traversed exactly once, so the complexity is $$$O(V + E)$$$, not $$$O(n^2)$$$.
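Since the code from that blog isn't reproduced here, a minimal BFS-style sketch (my own illustration, not himanshujaju's exact code) shows the same counting argument: the two loops look nested, but each node enters the queue once and each adjacency list is scanned once, so the total is $$$O(V + E)$$$.

```python
from collections import deque

def bfs_order(n, adj, src=0):
    """Return the order in which BFS visits nodes: O(V + E) total work."""
    seen = [False] * n
    seen[src] = True
    order = []
    q = deque([src])
    while q:                      # "outer loop": each node enters the queue once
        u = q.popleft()
        order.append(u)
        for v in adj[u]:          # "inner loop": each edge is scanned once per endpoint
            if not seen[v]:
                seen[v] = True
                q.append(v)
    return order

# Example: path graph 0-1-2-3
adj = [[1], [0, 2], [1, 3], [2]]
print(bfs_order(4, adj))  # [0, 1, 2, 3]
```

Summing over the whole run, the inner loop executes once per edge endpoint, which is why the nested-looking code is linear in the graph size.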

  • »
    »
    44 hours ago, # ^ |

    For question 2, you can think of it this way: you aren't really performing all of those operations; counting them loop-by-loop is only a rough over-approximation.

    For example, in a nested for loop, are you really doing $$$O(n^2)$$$ operations? You can write a nested for/while loop where the inner loop's work only pushes the same overall process toward completion, so its total cost is shared across all iterations of the outer loop.

    Or when running a DFS traversal from each node: are you really paying the $$$O(n+m)$$$ cost of a full DFS for every node? No, because you skip nodes that were already visited, each DFS only explores its own connected component, so across all starting nodes you execute far fewer than $$$n$$$ full traversals.
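    A small sketch of that (illustrative names, not from any particular problem): the code below "runs a DFS from every node", yet each node and edge is handled once in total, so the whole thing is $$$O(n+m)$$$, not $$$n$$$ separate traversals.

```python
def count_components(n, adj):
    """Count connected components of an undirected graph in O(n + m) total."""
    seen = [False] * n
    comps = 0

    def dfs(u):
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs(v)

    for u in range(n):        # looks like "a DFS for each node" ...
        if not seen[u]:       # ... but each DFS only explores unvisited nodes,
            comps += 1        # so across all calls every node/edge is handled once
            dfs(u)
    return comps

adj = [[1], [0], [3], [2], []]   # components: {0,1}, {2,3}, {4}
print(count_components(5, adj))  # 3
```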

    Maybe another useful example is the monotonic stack: how many pop() operations can you execute in total? At most $$$O(n)$$$, since each element is pushed once and can therefore be popped at most once. Always think about the operations you are really doing and what the upper bound on them is.
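    For instance, a next-greater-element sketch with a monotonic stack (a standard example, names are mine): the while loop inside the for loop looks quadratic, but every index is pushed once and popped at most once, so there are at most $$$n$$$ pops overall and the whole thing is $$$O(n)$$$.

```python
def next_greater(a):
    """For each element, the next greater value to its right (-1 if none)."""
    res = [-1] * len(a)
    stack = []                              # indices of a strictly decreasing run
    for i, x in enumerate(a):
        while stack and a[stack[-1]] < x:   # each index is pushed once and
            res[stack.pop()] = x            # popped at most once: <= n pops total
        stack.append(i)
    return res

print(next_greater([2, 1, 3, 0, 4]))  # [3, 3, 4, 4, -1]
```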

»
38 hours ago, # |

amortize verb [T] formal (UK usually amortise) UK /əˈmɔː.taɪz/ US /ˈæm.ɔːr.taɪz/, /æmˈɔːr.taɪz/

1. to reduce a debt or cost by paying small regular amounts: They pay monthly loan payments based on a formula that amortizes the debt over 15 years, at 8 percent interest. The economics of a show depend on the number of weeks over which the producer can amortize the start-up costs.

2. to take a cost, for example the cost of something bought for a business, away from the amount of tax that is paid, in small amounts over a period of time: The value of the machinery is amortized over its estimated useful life.

»
30 hours ago, # |

Amortized time complexity essentially means: "if we take the average number of instructions this operation takes when we perform it many times, we get this complexity". So when an operation runs in amortized $$$O(\log n)$$$, you know that running it $$$q$$$ times takes $$$O(q \log n)$$$ in total, but it is not guaranteed that each individual operation actually runs in $$$O(\log n)$$$.
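A classic concrete case is a dynamic array with capacity doubling (appending to such an array is amortized $$$O(1)$$$; the sketch and its names below are my own illustration). An individual append is occasionally expensive because it triggers a resize that copies everything, but a resize copying $$$k$$$ elements only happens after $$$k$$$ cheap appends, so the total work over $$$n$$$ appends is $$$O(n)$$$.

```python
class DoublingArray:
    """Dynamic array with capacity doubling: append is amortized O(1)."""

    def __init__(self):
        self.buf = [None]          # capacity 1
        self.n = 0                 # number of stored elements
        self.copies = 0            # element copies performed by resizes

    def append(self, x):
        if self.n == len(self.buf):
            # Expensive step: double the capacity, copying all n elements.
            self.buf = self.buf + [None] * len(self.buf)
            self.copies += self.n
        self.buf[self.n] = x       # cheap step: write one slot
        self.n += 1

a = DoublingArray()
for i in range(1000):
    a.append(i)
# Resizes copied 1 + 2 + 4 + ... + 512 = 1023 elements for 1000 appends:
# roughly one copy per append on average, i.e. O(n) total, O(1) amortized.
print(a.copies)  # 1023
```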

»
25 hours ago, # |

1 | Because it doesn't matter much. Suppose you spend $$$3n^3$$$ operations. Well, okay, the code just takes a couple of times more time/resources than $$$n^3$$$ operations; in reality your ability to solve the problem wouldn't change. What is important, though, is the type of function describing this time. Is it $$$O(1)$$$? $$$O(n)$$$? $$$O(2^n)$$$?

Because if it is $$$O(n)$$$, then to go from solving a problem with $$$n = 20$$$ to one with $$$n = 100$$$ you need only 5 times more resources/time, which is fine. But if it is $$$O(2^n)$$$, going from $$$n = 20$$$ to $$$n = 100$$$ would need far too much time/resources, and you wouldn't be able to do it.
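To put rough numbers on that comparison:

```latex
\frac{100}{20} = 5
\qquad\text{vs.}\qquad
\frac{2^{100}}{2^{20}} = 2^{80} \approx 1.2 \times 10^{24}.
```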

So in reality, what describes how well you can solve the problem isn't the constant, but the function.

2 | If your code does many iterations, sometimes you can't really prove that each iteration works fast, and a lot of the time that's just not true. But it turns out that in many cases you can show that in total, on average, the iterations work fast. For example, you have $$$n$$$ iterations, and each one of them can potentially spend up to $$$n$$$ time, but based on some idea you can prove that the total sum of operations is bounded not just by $$$n^2$$$ but by $$$O(n)$$$.
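The two-pointer technique is exactly that situation (a standard sketch, my own illustrative example): the inner while loop could in principle run up to $$$n$$$ times in one iteration, but the `left` pointer only ever moves forward, so across the whole run it moves at most $$$n$$$ times and the total is $$$O(n)$$$.

```python
def longest_sum_at_most(a, k):
    """Longest subarray of non-negative `a` with sum <= k, in O(n) total."""
    left = 0
    s = 0
    best = 0
    for right, x in enumerate(a):
        s += x
        while s > k:          # looks like an O(n) inner loop, but `left`
            s -= a[left]      # only increases: at most n steps over the
            left += 1         # whole run, so the sum of all iterations is O(n)
        best = max(best, right - left + 1)
    return best

print(longest_sum_at_most([1, 2, 1, 0, 1, 4], 3))  # 3
```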