swarya's blog

By swarya, history, 11 months ago, In English

hey guys,

I'm Swastik Pandey, I'm 12 years old, and I study in 8th grade.

My aim is to compete in the IOI when I'm in 10th grade (2027).

I've been following this roadmap for the last year:

1) cp-algorithms.com: learn 2-3 topics every day
2) Codeforces: daily practice (40% problemset, 60% acmsguru)
3) YouTube

So can you all improve this roadmap and help me win IOI by 2027?

(Some people will not believe me and will check my submissions,

so I want to tell you all that I have taken 3 contests: 1027, 1026, and 1025.)

But in contest no. 1025 I cheated on 1 question, only 1,

and I have apologized for it.

So ignore it, check my later contests like 1026 or 1027, and check my problem sets. Thank you to AttractorsTheory, ivanovsergyey, and temp-for-talk for making me realize, on the first contest's first question, that I was wrong.


By swarya, history, 11 months ago, In English

Are there any O(1/n) algorithms?

Or anything else which is less than O(1)?

This question isn't as silly as it might seem to some. At least theoretically, something such as O(1/n) is completely sensible when we take the mathematical definition of the Big O notation: f(x) is in O(g(x)) if there exist constants c > 0 and x0 such that |f(x)| <= c * |g(x)| for all x > x0.

Now you can easily take g(x) = 1/x: it's obvious that the above definition still holds for some f.

For the purpose of estimating asymptotic run-time growth, this is less viable … a meaningful algorithm cannot get faster as the input grows. Sure, you can construct an arbitrary algorithm to fulfill this, e.g. the following one:

```python
from time import sleep

def get_faster(lst):
    how_long = (1 / len(lst)) * 100000
    sleep(how_long)
```

Clearly, this function spends less time as the input size grows, at least until some limit enforced by the hardware (precision of the numbers, the minimum time that sleep can wait, time to process arguments, etc.): this limit would then be a constant lower bound, so in fact the above function still has runtime O(1).

But there are in fact real-world algorithms where the runtime can decrease (at least partially) when the input size increases. Note that these algorithms will not exhibit runtime behaviour below O(1), though. Still, they are interesting. For example, take the very simple text search algorithm by Horspool. Here, the expected runtime will decrease as the length of the search pattern increases (but increasing length of the haystack will once again increase runtime).
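To make that concrete, here is a minimal sketch of Horspool's search (the function name and structure are my own illustration, not from the original post). On a mismatch, the shift table lets the window jump by up to the full pattern length, which is why the expected runtime drops as the pattern grows:

```python
def horspool_search(text, pattern):
    """Horspool substring search; returns the first match index or -1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # shift table: how far the window can jump when the text character
    # aligned with the pattern's last position is c
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = m - 1  # index in text aligned with the last pattern character
    while i < n:
        k = 0
        while k < m and text[i - k] == pattern[m - 1 - k]:
            k += 1
        if k == m:
            return i - m + 1
        # characters absent from the pattern allow the full jump of m,
        # so longer patterns tend to skip more text per mismatch
        i += shift.get(text[i], m)
    return -1
```

Note the trade-off the post describes: a longer pattern makes the average jump bigger, but a longer haystack still means more windows to examine.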

I often use O(1/n) to describe probabilities that get smaller as the inputs get larger -- for example, the probability that a fair coin comes up tails on log2(n) flips is O(1/n).
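As a quick sanity check of that claim (my own illustration, not from the original post): the probability that log2(n) fair flips all come up tails is (1/2)^(log2 n), which equals exactly 1/n:

```python
import math

probs = {}
for n in [4, 16, 1024]:
    flips = math.log2(n)     # number of fair coin flips
    probs[n] = 0.5 ** flips  # chance that every flip comes up tails
# (1/2)^(log2 n) = 1/n, so this probability is O(1/n)
```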

My views on this are:

Many algorithms can be O(h^p) or O(n^{-p}) depending on whether we're talking about the step size (h) or the number of divisions (n). For example, in Euler's method, you look for an estimate of y(h) given that you know y(0) and dy/dx (the derivative of y). Your estimate of y(h) is more accurate the closer h is to 0. So in order to find y(x) for some arbitrary x, one takes the interval from 0 to x, splits it into n pieces, and runs Euler's method at each point, to get from y(0) to y(x/n) to y(2x/n), and so on.

So Euler's method is then an O(h) or O(1/n) algorithm, where h is typically interpreted as a step size and n is interpreted as the number of times you divide an interval.
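A small sketch (my own illustration, not from the original post) of that O(h) = O(1/n) behaviour: for y' = y with y(0) = 1, the exact answer at x = 1 is e, and doubling n roughly halves Euler's error.

```python
import math

def euler(f, y0, x_end, n):
    # n steps of Euler's method with step size h = x_end / n
    h = x_end / n
    x, y = 0.0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# y' = y with y(0) = 1 has exact solution y(1) = e
err_100 = abs(math.e - euler(lambda x, y: y, 1.0, 1.0, 100))
err_200 = abs(math.e - euler(lambda x, y: y, 1.0, 1.0, 200))
# the global error is O(h) = O(1/n): err_100 / err_200 is close to 2
```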

You can also have O(1/h) in real numerical analysis applications, because of floating point rounding errors. The smaller you make your interval, the more cancellation occurs for the implementation of certain algorithms, more loss of significant digits, and therefore more error, which gets propagated through the algorithm.

For Euler's method, if you are using floating point, then with a small enough step, cancellation kicks in: you're adding a small number to a big number, leaving the big number unchanged. For algorithms that calculate the derivative by subtracting two evaluations of a function at very close positions, approximating y'(x) with (y(x+h) - y(x)) / h, for smooth functions y(x+h) gets close to y(x), resulting in large cancellation and an estimate of the derivative with fewer significant figures. This will in turn propagate to whatever algorithm you require the derivative for (e.g., a boundary value problem).
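A quick sketch of that trade-off (my own illustration, not from the original post) using the forward difference: below a certain h, the rounding error, which grows like O(1/h), dominates the truncation error, so a tinier step gives a worse derivative.

```python
import math

def forward_diff(f, x, h):
    # forward-difference approximation of f'(x)
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # derivative of sin is cos
err_moderate = abs(forward_diff(math.sin, x, 1e-8) - exact)
err_tiny = abs(forward_diff(math.sin, x, 1e-14) - exact)
# with h = 1e-14 the subtraction cancels almost every significant digit,
# so the tiny step is far less accurate than the moderate one
```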

If you've seen this blog, then please see my other blog too; it is much more interesting.

(Before downvoting, tell me what's wrong with my question and what I should do now.)


By swarya, history, 11 months ago, In English

This whole time we will think of n as 2, I mean int n = 2; take an array of 2 elements, let's suppose [3, 1].

now you have to sort it in ascending order

```python
import random

arr = [3, 1]
if random.randint(0, 1) == 1:  # 50% chance to swap (1 out of {0, 1})
    arr[0], arr[1] = arr[1], arr[0]
print(arr)
```

It will swap the elements in O(1).

And since it's sorting, its best case is O(1), so its worst case is O(n!).

So actually there is a chance that in the first iteration we will get a sorted array,

but the probability is 50%.

So can we say that an O(1) sorting algorithm exists?
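A quick simulation sketch (my own, not from the original post) shows why not: after one 50% coin-flip swap, the array ends up sorted only about half the time, so the procedure is not a correct sorting algorithm even for n = 2.

```python
import random

def coin_flip_sort(arr):
    # randomly swap the two elements with 50% probability
    arr = arr[:]
    if random.randint(0, 1) == 1:
        arr[0], arr[1] = arr[1], arr[0]
    return arr

random.seed(0)  # fixed seed so the experiment is repeatable
trials = 10000
sorted_count = sum(coin_flip_sort([3, 1]) == [1, 3] for _ in range(trials))
# roughly half the runs end up sorted, the other half do not
```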

(Before downvoting, tell me what's wrong with my question and what I should do now.)

Conclusion: after reading all your comments, I have realised that we can't say a sorting algorithm is actually a sorting algorithm if it can only sort arrays up to a fixed length.
