swarya's blog

By swarya, history, 11 months ago, In English

Are there any O(1/n) algorithms?

Or anything else which is less than O(1)?

This question isn't as silly as it might seem to some. At least theoretically, something such as O(1/n) is completely sensible when we take the mathematical definition of big O notation:

f(x) = O(g(x)) as x → ∞ if there exist constants c > 0 and x₀ such that |f(x)| ≤ c · |g(x)| for all x ≥ x₀.

Now you can take g(x) = 1/x: it is obvious that the above definition still holds for some f.
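As a concrete sanity check (my own illustration, with f chosen arbitrarily), take f(x) = 1/(2x): it satisfies the definition with g(x) = 1/x and c = 1.

```python
# Numeric check of the big O definition with g(x) = 1/x:
# f(x) = 1/(2x) satisfies |f(x)| <= c * |g(x)| with c = 1 for all x >= 1,
# so f(x) = O(1/x) by the definition above.
def f(x: float) -> float:
    return 1 / (2 * x)

def g(x: float) -> float:
    return 1 / x

c = 1.0
assert all(abs(f(x)) <= c * abs(g(x)) for x in range(1, 10_000))
```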

For the purpose of estimating asymptotic run-time growth, this is less viable: a meaningful algorithm cannot get faster as the input grows. Sure, you can construct an arbitrary algorithm to fulfill this, e.g. the following one:

```python
from time import sleep

def get_faster(lst):
    # sleep for a duration inversely proportional to the input length
    how_long = (1 / len(lst)) * 100000
    sleep(how_long)
```

Clearly, this function spends less time as the input size grows, at least until some limit enforced by the hardware (precision of the numbers, the minimum time that sleep can wait, time to process arguments, etc.): this limit would then be a constant lower bound, so in fact the above function still has runtime O(1).

But there are in fact real-world algorithms where the runtime can decrease (at least partially) when the input size increases. Note that these algorithms will not exhibit runtime behaviour below O(1), though. Still, they are interesting. For example, take the very simple text search algorithm by Horspool. Here, the expected runtime will decrease as the length of the search pattern increases (but increasing length of the haystack will once again increase runtime).
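To make this concrete, here is a minimal sketch of Horspool's algorithm (my own illustration, not from the original post). The bad-character shift table allows the search to skip up to len(pattern) characters at a time, which is why a longer pattern tends to mean fewer character inspections per unit of haystack.

```python
# Minimal sketch of Horspool's substring search. The key point: the
# bad-character shift table lets the search skip up to len(pattern)
# characters at a time, so a LONGER pattern tends to mean FEWER
# comparisons per character of the haystack on average.
def horspool_search(haystack: str, pattern: str) -> int:
    m, n = len(pattern), len(haystack)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Shift table: distance from a character's last occurrence in
    # pattern[:-1] to the end of the pattern; default shift is m.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    pos = 0
    while pos <= n - m:
        if haystack[pos:pos + m] == pattern:
            return pos
        # Shift by the table entry for the haystack character aligned
        # with the pattern's last position (m if the char never occurs).
        pos += shift.get(haystack[pos + m - 1], m)
    return -1
```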

I often use O(1/n) to describe probabilities that get smaller as the inputs get larger -- for example, the probability that a fair coin comes up tails on all of log2(n) flips is (1/2)^(log2 n) = 1/n, which is O(1/n).
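A quick check of that arithmetic (my own snippet), using powers of two so that log2(n) is an integer number of flips:

```python
import math

# The probability that a fair coin lands tails on every one of
# log2(n) independent flips is (1/2)^(log2 n) = 1/n, i.e. O(1/n).
def all_tails_probability(n: int) -> float:
    flips = math.log2(n)
    return 0.5 ** flips

for n in (2, 8, 1024):
    assert math.isclose(all_tails_probability(n), 1 / n)
```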

My views on this are:

Many algorithms can be O(h^p) or O(n^{-p}) depending on whether we're talking about the step size (h) or the number of subdivisions (n). For example, in Euler's method, you look for an estimate of y(h) given that you know y(0) and dy/dx (the derivative of y). Your estimate of y(h) is more accurate the closer h is to 0. So in order to find y(x) for some arbitrary x, one takes the interval from 0 to x, splits it into n pieces, and runs one Euler step at each point, getting from y(0) to y(x/n) to y(2x/n), and so on.

So Euler's method is then an O(h) or O(1/n) algorithm, where h is typically interpreted as a step size and n is interpreted as the number of times you divide an interval.
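The O(1/n) behaviour of the error is easy to observe numerically. Below is a sketch (my own example, using y' = y on [0, 1], whose exact solution at x = 1 is e): doubling n roughly halves the error.

```python
import math

# Sketch of Euler's method for y' = f(x, y): the global error is
# O(h) = O(x/n), so the estimate IMPROVES as n (the number of steps)
# grows -- the sense in which the method is "O(1/n)".
def euler(f, y0: float, x: float, n: int) -> float:
    h = x / n  # step size
    y = y0
    for i in range(n):
        y += h * f(i * h, y)  # one Euler step: y += h * y'(x_i)
    return y

# Solve y' = y, y(0) = 1 on [0, 1]; the exact answer is e.
err_100 = abs(euler(lambda x, y: y, 1.0, 1.0, 100) - math.e)
err_200 = abs(euler(lambda x, y: y, 1.0, 1.0, 200) - math.e)
# Doubling n roughly halves the error, consistent with O(1/n).
assert err_200 < err_100
```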

You can also have O(1/h) behaviour in real numerical analysis applications because of floating-point rounding errors: the smaller you make your interval, the more cancellation occurs in the implementation of certain algorithms, the more significant digits are lost, and therefore the more error is propagated through the algorithm.

For Euler's method with floating point, if you use a small enough step you run into cancellation: you are adding a small number to a big number, leaving the big number unchanged. For algorithms that calculate the derivative by subtracting two evaluations of a function at very close positions, approximating y'(x) with (y(x+h) - y(x)) / h, in smooth functions y(x+h) gets close to y(x), resulting in large cancellation and an estimate of the derivative with fewer significant figures. This will in turn propagate to whatever algorithm you need the derivative for (e.g., a boundary value problem).
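This trade-off is easy to demonstrate (my own snippet, using y = sin and its known derivative cos): past a certain point, making h smaller makes the forward-difference estimate worse, not better.

```python
import math

# Forward-difference estimate of y'(x): (y(x+h) - y(x)) / h.
# The truncation error shrinks like O(h), but the cancellation error
# from subtracting two nearly equal floats grows like O(eps/h) --
# so past some point, a SMALLER h gives a WORSE answer.
def forward_diff(y, x: float, h: float) -> float:
    return (y(x + h) - y(x)) / h

exact = math.cos(1.0)  # derivative of sin at x = 1
err_moderate = abs(forward_diff(math.sin, 1.0, 1e-8) - exact)
err_tiny = abs(forward_diff(math.sin, 1.0, 1e-15) - exact)
# The far smaller step gives a far LARGER error: rounding dominates.
assert err_tiny > err_moderate
```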

If you've seen this blog, then please see my other blog too; it is much more interesting.

(Before downvoting, tell me what's wrong with my question and what I should do now.)


»
11 months ago

O(1/n) would mean that, in the worst case, the time taken is the inverse of n, which is self-contradictory, as then it would not be the worst case. I am saying this assuming the common informal use of big O as the worst-case time complexity.

»
11 months ago

If N can be a float, then I guess yes: we can always take a problem solvable in O(N) and give the user not N but A = 1/N instead (and mention this in the statement). Then the complexity becomes O(1/A). But if N is an integer with N > 1, a complexity below O(1) would be required, and O(1), a single logical operation, is the minimum: nothing can be faster than that (except perhaps O(0), when the solution is always the same regardless of N).

  • »
    »
    11 months ago, Rev. 2

    You make an excellent observation. Redefining the problem with A = 1/N so that the complexity reads O(1/A) is an angle I hadn't considered before. As you pointed out, for integer N > 1, a complexity below O(1) would be a logical paradox. I also appreciate the mention of O(0) for solutions that are constant regardless of the input. Do you know of any tangible problem where this substitution A = 1/N is actually adopted in competitive programming or numerical analysis? And could you give more examples of conditions under which floating-point values might lead to such complexities?

    Actually, I translated this with Google Translate; I hope it worked well and that you understood what I said.

    • »
      »
      »
      11 months ago, Rev. 3

      Well, I don't think this makes sense for real algorithms, because a larger N usually means a bigger input to process. So unless we redefine the problem constraints in reverse, I don't think there are real algorithms where, for any integers A < B, input A means more work to do than input B.

»
11 months ago

It's a number of operations, so O(1/n) does not make sense at all; it should be an integer at least.