## Big O notation? Would that be O(n)?

**What would be the Big O and Big Omega notation?**

```c
int temp = 0;
for (i = 0; i < 100000000; i++)
    for (j = 0; j < N; j++) {
        temp = 0;
        while (temp < j) {
            temp++;
        }
    }
```

That thing is O(n^2): the innermost `while` runs from 0 to `j`, whose upper bound is `N`.


This algorithm's run-time complexity is `O(n^2)`.

The outer loop repeats a *constant* number of times (even if the constant is large, it is still a constant; 10^8 is no different from 10^100 in this regard).

The complexity is determined by the two inner loops. The innermost loop repeats `j` times, and `j` goes from 0 to `N-1`.

`0 + 1 + 2 + 3 + ... + (N-1) = N(N-1)/2`, so the complexity function is `f(N) = 10^8 * N(N-1)/2`, and `O(f(N)) = O(N^2)`.
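As a quick sanity check (a sketch, not part of the original answer), counting the inner iterations directly for a small `N` matches the closed-form sum. Since `j < N`, the sum runs to `N - 1`, giving `N(N-1)/2`; asymptotically this is the same `Θ(N^2)`.

```python
# Count how many times the innermost statement runs for a given N,
# ignoring the constant outer loop.
def inner_iterations(N):
    count = 0
    for j in range(N):       # second loop: j = 0 .. N-1
        temp = 0
        while temp < j:      # innermost loop: runs j times
            temp += 1
            count += 1
    return count

# Closed form: 0 + 1 + ... + (N-1) = N(N-1)/2
assert inner_iterations(10) == 10 * 9 // 2   # 45
```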

This is also `Omega(N^2)`, which means this algorithm is ultimately `Theta(N^2)`.

Remember that the big O and big Omega notations are about asymptotic limits: you only need to find constants and some threshold value of n after which f(n) <= c1*g(n) (for the upper bound) and f(n) >= c2*h(n) (for the lower bound). It should be easy to find such values for g(n) = h(n) = n^2.

In the end, you can squeeze f(n) (as defined above) between two c*n^2 functions with different multipliers.
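A quick numeric check of that squeeze (a sketch; the constant `C = 10**8` is taken from the outer loop bound): with `f(N) = C * N(N+1)/2`, the multipliers `c1 = C/2` and `c2 = C` work for every `N >= 1`.

```python
# f(N) = C * N(N+1)/2 is squeezed between c1*N^2 and c2*N^2 for N >= 1:
# N(N+1)/2 >= N^2/2 always, and N(N+1)/2 <= N^2 once N >= 1.
C = 10**8

def f(N):
    return C * N * (N + 1) // 2

c1, c2 = C // 2, C
assert all(c1 * N**2 <= f(N) <= c2 * N**2 for N in range(1, 1000))
```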


Big Omega gives a lower bound, here for the whole algorithm.

The first loop is executed `100000000` times, which is a constant. The second loop is executed `N` times. The third loop is executed `N` times at most (0, 1, 2, 3, ..., N).

The total count is `100000000 * N * ((N+1)/2)`, which is `O(N^2)` and also `Omega(N^2)`.

The algorithm would have quadratic time complexity.

Big-O notation just describes asymptotic bounds, so it is correct to say something like, for example, "Quicksort is in O(n!)," even though Quicksort's actual worst-case running time will never exceed O(n^2). All Big-O is saying is: for an input of size n, there is a value of n after which Quicksort will always take fewer than n! steps to complete.

The overall time complexity of this algorithm will be O(n^3), assuming that the value 100 Million in the outermost loop is actually N.

The outermost loop will iterate n times. For every iteration, the first inner loop iterates n times. For every iteration of this inner loop, the second inner loop iterates n times in the worst case and n/2 times on average. So the time complexity is on the order of n × n × n = n^3.
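Under this answer's assumption, the counting can be sketched directly (with the outer bound replaced by `n` rather than the constant): the total is `n * n(n-1)/2`, i.e. cubic.

```python
# All three loops, with the outermost bound treated as n instead of 10^8.
def total_iterations(n):
    count = 0
    for i in range(n):
        for j in range(n):
            temp = 0
            while temp < j:       # runs j times
                temp += 1
                count += 1
    return count

# n outer iterations, each costing 0 + 1 + ... + (n-1) = n(n-1)/2
assert total_iterations(10) == 10 * (10 * 9 // 2)   # 450
```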



The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the case of Insertion Sort: it takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2).
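A minimal insertion sort sketch illustrating the two cases: an already-sorted input does one comparison per element (linear), while a reversed input shifts every earlier element (quadratic).

```python
def insertion_sort(xs):
    xs = list(xs)
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        # In the best case this while loop exits immediately;
        # in the worst case it shifts all i earlier elements.
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

assert insertion_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]
```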

It can't be simplified to O(n) or O(n^2), so O(n log n) is a "family" of algorithms in itself, and it's also a very common algorithm complexity. You can get it, for example, from a divide-and-conquer algorithm in which you solve subproblems and combine the results. Quicksort, mergesort and heapsort run in O(n log n).
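A minimal merge sort sketch of that divide-and-conquer shape (split, solve subproblems, combine), which gives the O(n log n) behaviour described above:

```python
def merge_sort(xs):
    if len(xs) <= 1:               # base case: trivially sorted
        return xs
    mid = len(xs) // 2
    # Divide: solve the two halves recursively (log n levels of splitting).
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Combine: linear-time merge of the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

assert merge_sort([5, 2, 9, 1, 5, 6]) == [1, 2, 5, 5, 6, 9]
```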

##### Comments

- This code won't even compile in any language of which I'm aware. Did you copy this from a whiteboard, and if so, could it be missing a few things?
- Just consider that second loop is inside first and third loop is inside second.
- Please edit your question to at least make it valid pseudo code; don't leave us guessing.
- For the 1st loop, it is O(1); for the second it is O(n). What would it be for the 3rd loop? Would that be O(1) or O(n)? That's what is confusing for me.
- Over all the times the third loop runs it'll average 0.5 x n iterations. Since the 0.5 is a constant factor you can remove it and you'll end up with O(n) time complexity for the 3rd loop. The idea is that the first time it iterates 0 times, the last time n times so (0 + n) / 2 = 0.5n times on average. Then the second will iterate 1 time and the second to last will iterate n - 1 times giving (1 + (n - 1)) / 2 = 0.5n times...this continues giving the average 0.5n iterations
- The question also asks about big Omega
- My answer was a correction and is more detailed than yours. @blindy
- Why do you say the 3rd loop is never executed? If j==1000, doesn't it run 1000 times before temp reaches j?
- *"assuming that the value 100 Million in the outermost loop is actually N."* But it is not; it is 100 million.
- I totally understand. My assumption was based on the original poster's poor-quality pseudo code, which does not even compile and which he/she was too lazy to edit even after multiple users pointed it out. I thought he/she actually meant N instead of 100M while copy-pasting from somewhere.
- By definition, pseudocode is not expected to compile.