## Derive a while loop runs in $\Theta( \sqrt{n} )$

I know for a fact that algorithm **A** runs in $\Theta(\sqrt{n})$, but how does one derive that fact?

Algorithm **A**

```python
i = 0
s = 0
while s <= n:
    s += i
    i += 1
```

Here is what I am thinking. We know that **A** is upper-bounded by $O(n)$, since after the first few iterations we increment $s$ by at least 1 in each iteration. We also know that it must be lower-bounded by $\Omega(\log n)$, since we increment $s$ by something less than $2^i$ in iteration $i$. (Correct me if I am wrong; these are just my thoughts.)

Now, what else can we say about **A**? How do we derive that its time complexity is $\Theta(\sqrt{n})$?

Each time we increment `i`, we add `i` to `s`. This means that after `k` steps, `s` has grown to:

$$s = \sum_{i=0}^{k} i$$

This sum is well known: after the *k*-th step, *s* equals *T<sub>k</sub>*, the *k*-th *triangular number* [wiki]. A closed formula for *T<sub>k</sub>* is *T<sub>k</sub> = k×(k+1)/2*. *T<sub>k</sub>* thus scales *quadratically* with *k*.

The process will thus stop when *T<sub>k</sub>* exceeds *n*. We can thus determine that *T<sub>k</sub> > n*, and thus *k×(k+1)/2 > n*, and thus *k²/2 + k/2 − n > 0*. This is a quadratic inequality with discriminant *d = 1/4 + 2×n*, and thus with (positive) root *k₀ = −1/2 + √d*. The number of iterations thus scales with *√(2n)*, which is *Θ(√n)*.
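The derivation can be checked numerically. Here is a minimal sketch that compares the actual iteration count of **A** against the root *k₀ = −1/2 + √(1/4 + 2n)* (the helper names `iterations` and `predicted` are mine, not from the question):

```python
import math

def iterations(n):
    """Count the iterations of algorithm A directly."""
    i = 0
    s = 0
    count = 0
    while s <= n:
        s += i
        i += 1
        count += 1
    return count

def predicted(n):
    """Positive root of k^2/2 + k/2 - n = 0."""
    return -0.5 + math.sqrt(0.25 + 2 * n)

for n in (100, 10_000, 1_000_000):
    print(n, iterations(n), predicted(n))
```

The two columns stay within a couple of steps of each other as `n` grows, which is exactly what the quadratic-root argument predicts.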


To support your reasoning, you can run experimental tests that count the number of iterations. For example:

```python
import math

for n in range(100000000, 1000000000, 100000000):
    i = 0
    s = 0
    while s <= n:
        s += i
        i += 1
    # i now equals the number of iterations for this n
    print(n, i, math.sqrt(n))
```

and get the following results:

| n         | iterations | sqrt(n)  |
|-----------|------------|----------|
| 100000000 | 14143      | 10000.00 |
| 200000000 | 20001      | 14142.14 |
| 300000000 | 24496      | 17320.51 |
| 400000000 | 28285      | 20000.00 |
| 500000000 | 31624      | 22360.68 |
| 600000000 | 34642      | 24494.90 |
| 700000000 | 37418      | 26457.51 |
| 800000000 | 40001      | 28284.27 |
| 900000000 | 42427      | 30000.00 |
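Note that the iteration counts line up even more closely with `sqrt(2 * n)` than with `sqrt(n)` — e.g. 14143 iterations for n = 100000000, where √(2n) ≈ 14142.14. A quick variant of the same experiment makes this visible (the extra column is my addition):

```python
import math

for n in range(100000000, 1000000000, 100000000):
    i = 0
    s = 0
    while s <= n:
        s += i
        i += 1
    # Compare the measured iteration count with the predicted sqrt(2n).
    print(f"{n} | {i} | {math.sqrt(2 * n):.2f}")
```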

##### EDIT

As recommended by @WillNess, you can learn more about it here.


As you can see from the code, `s = 1 + 2 + 3 + ... + i` (1), and the loop continues while `s <= n` (2). The first equation can also be written as `s = i * (i + 1) / 2`, which means that `i` is approximately `sqrt(s * 2)`, and by (2) approximately `sqrt(n * 2)`. As we see from the code, the `while` loop runs `i` times, and each iteration does an `O(1)` calculation. Therefore the overall complexity is `O(sqrt(n))`.
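Inverting `s = i * (i + 1) / 2` with the quadratic formula gives an exact closed form for the largest `i` with `i * (i + 1) / 2 <= n`. A small sketch comparing it against the loop (the helper names are mine; the loop overshoots the closed form by a constant couple of steps, since it runs until `s` strictly exceeds `n`):

```python
import math

def exact_inverse(n):
    """Largest i with i * (i + 1) / 2 <= n, via the quadratic formula."""
    return int((math.sqrt(8 * n + 1) - 1) / 2)

def by_loop(n):
    """Final value of i in the original while loop."""
    i = 0
    s = 0
    while s <= n:
        s += i
        i += 1
    return i

for n in (10, 1000, 1000000):
    print(n, exact_inverse(n), by_loop(n))
```

Both grow like `sqrt(2 * n)`, differing only by a small constant, so the `O(sqrt(n))` conclusion is unaffected.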
