# Functions in asymptotic notation

When we use asymptotic notation to express the rate of growth of an algorithm's running time in terms of the input size n, it's good to bear a few things in mind.

Let's start with something easy. Suppose that an algorithm took a constant amount of time, regardless of the input size. For example, if you were given an array that is already sorted into increasing order and you had to find the minimum element, it would take constant time, since the minimum element must be at index 0. Since we like to use a function of n in asymptotic notation, you could say that this algorithm runs in Θ(n^0) time. Why? Because n^0 = 1, and the algorithm's running time is within some constant factor of 1. In practice, we don't write Θ(n^0), however; we write Θ(1).
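As a quick illustration, here is a minimal sketch of such a constant-time operation (the function name is ours, not part of the article):

```python
def minimum_of_sorted(arr):
    """Return the minimum of an array sorted into increasing order.

    The work done here does not depend on len(arr), so this runs
    in Theta(1) time.
    """
    if not arr:
        raise ValueError("array must be non-empty")
    # In a sorted (increasing) array, the minimum must be at index 0.
    return arr[0]
```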

Now suppose an algorithm took Θ(log_10 n) time. You could also say that it took Θ(log_2 n) time. Whenever the base of the logarithm is a constant, it doesn't matter what base we use in asymptotic notation. Why not? Because there's a mathematical formula that says

log_a n = (log_b n) / (log_b a)

for all positive numbers a, b, and n. Therefore, if a and b are constants, then log_a n and log_b n differ only by a factor of log_b a, and that's a constant factor which we can ignore in asymptotic notation.
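This constant-factor relationship is easy to check numerically; a small sketch using Python's `math` module:

```python
import math

# Change of base: log_a(n) = log_b(n) / log_b(a). So log_2(n) and
# log_10(n) differ only by the constant factor log_2(10) ~= 3.3219,
# no matter how large n gets.
for n in [10, 1_000, 1_000_000]:
    print(n, math.log2(n) / math.log10(n))  # always log_2(10)
```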

Therefore, we can say that the worst-case running time of binary search is Θ(log_a n) for any positive constant a. Why? The number of guesses is at most log_2 n + 1, generating and testing each guess takes constant time, and setting up and returning take constant time. However, as a matter of practice, we often write that binary search takes Θ(log_2 n) time because computer scientists like to think in powers of 2.
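To see where the log_2 n + 1 bound comes from, here is a minimal binary search sketch that also counts its guesses (the helper name and the guess counter are ours, added for illustration):

```python
def binary_search(arr, target):
    """Search a sorted array; return (index or None, number of guesses)."""
    low, high = 0, len(arr) - 1
    guesses = 0
    while low <= high:
        guesses += 1
        mid = (low + high) // 2  # generate a guess: the middle element
        if arr[mid] == target:
            return mid, guesses
        elif arr[mid] < target:
            low = mid + 1        # discard the lower half
        else:
            high = mid - 1       # discard the upper half
    return None, guesses
```

On a 1000-element array, the loop runs at most floor(log_2 1000) + 1 = 10 times, whether or not the target is present.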

There is an order to the functions that we often see when we analyze algorithms using asymptotic notation. If a and b are constants and a < b, then a running time of Θ(n^a) grows more slowly than a running time of Θ(n^b). For example, a running time of Θ(n), which is Θ(n^1), grows more slowly than a running time of Θ(n^2). The exponents don't have to be integers, either. For example, a running time of Θ(n^2) grows more slowly than a running time of Θ(n^2 √n), which is Θ(n^2.5).

[Graph: comparison of the growth of n, n^2, and n^2.5.]

Logarithms grow more slowly than polynomials. That is, Θ(log_2 n) grows more slowly than Θ(n^a) for *any* positive constant a. But since the value of log_2 n increases as n increases, Θ(log_2 n) grows faster than Θ(1).

[Graph: comparison of the growth of 1, n, and log_2 n.]

Here's a list of functions in asymptotic notation that we often encounter when analyzing algorithms, ordered by slowest to fastest growing:

- Θ(1)
- Θ(log_2 n)
- Θ(n)
- Θ(n log_2 n)
- Θ(n^2)
- Θ(n^2 log_2 n)
- Θ(n^3)
- Θ(2^n)
- Θ(n!)
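One way to get a feel for this ordering is to evaluate each function at a couple of values of n; a small sketch (the table layout is ours):

```python
import math

# The functions from the list above, slowest- to fastest-growing.
funcs = [
    ("1",           lambda n: 1),
    ("log2 n",      lambda n: math.log2(n)),
    ("n",           lambda n: n),
    ("n log2 n",    lambda n: n * math.log2(n)),
    ("n^2",         lambda n: n ** 2),
    ("n^2 log2 n",  lambda n: n ** 2 * math.log2(n)),
    ("n^3",         lambda n: n ** 3),
    ("2^n",         lambda n: 2 ** n),
    ("n!",          lambda n: math.factorial(n)),
]

for name, f in funcs:
    print(f"{name:12} n=10: {f(10):>12,.0f}   n=20: {f(20):>24,.0f}")
```

Already at n = 20 each function's value is strictly larger than the previous one's, and the gaps widen rapidly as n grows.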

Note that an exponential function a^n, where a > 1, grows faster than any polynomial function n^b, where b is any constant.

The list above is not exhaustive; there are many functions with running times not listed there. You'll hopefully run into a few of those in your computer science journey.

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.

## Want to join the conversation?

- It doesn't mention how the base on a log affects its growth. What does changing the base of a log do to its growth rate? The part about the change-of-base formula says that all logs with constant bases are equal but on the next quiz it is seen that they are not. Why is that so?(15 votes)
- There are some pretty good answers to this... One more way to think about it:

When we write log_2(100), we ask **how** many times we can divide 100 **into parts of two** (as seen in the binary search algorithm).

When we write log_10(100), we ask **how** many times we can divide 100 **into parts of ten**.

We can make way **more parts of two than parts of ten**.

So log_2(100) > log_10(100).

But we say Θ(log_2(n)) = Θ(log_10(n)) because here we are talking about **very, very large values** of n (*asymptotic*), and log_2(n) and log_10(n) always differ only by a constant factor, which asymptotic notation ignores. (27 votes)

- I'm following the content but I'm unsure as to why the content is heading down this path. What is the reasoning behind learning or even reviewing this? Shouldn't lessons about the various speeds of algorithms come after we've learned the various algorithms? We were given a snippet of information regarding 2 simple algorithms and then sent into the woods with a handful of bread crumbs... or did I miss something? (24 votes)
- They use the asymptotic analysis when they discuss each algorithm, but if you wanted to you could:

- skip the asymptotic analysis section

- read the parts that discuss how each algorithm works (ignoring any asymptotic notation)

- skip the analysis of each algorithm

- after learning how each algorithm works, review the asymptotic notation section

- review the analysis section for each algorithm

The asymptotic notation is useful in telling the computer science part of the story. It tells you why one algorithm is better than another algorithm. It tells you why you would even bother learning merge sort and quick sort after learning about other sorting algorithms.(21 votes)

- Θ(n!) would grow faster than Θ(2^n), correct?(15 votes)
- To demonstrate that n! grows faster than 2^n, write them out like this:
`n! = 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8 * ... * n`

`2^n = 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * ... * 2`

Here's how we can show n! grows faster than 2^n:

If we have n! and 2^n written as above, we can see that the ratio of the terms for n > 2 is > 1 and increasing with larger n.

That is:

(3 * 4 * 5 * ... * n)/(2 * 2 * 2 * ... * 2) > 1 and increases with n.

Since this ratio is increasing, it will eventually become larger than the inverse ratio of the terms for n <= 2.

That is:

(3 * 4 * 5 * ... * n)/(2 * 2 * 2 * ... * 2) will eventually become greater than (2 * 2)/(1 * 2)

Once that occurs, the ratio between all the terms in n! and 2^n will be > 1.

Not only will the ratio be > 1, but it will also be increasing as n increases.

This demonstrates that n! grows faster than 2^n.

(Note that: The same logic can be used for n! and c^n where c is any constant)

Another approach is to use an approximation for n!.

Stirling's Approximation says:

n! ~= sqrt(2 * Pi * n) * (n/e)^n

(https://en.wikipedia.org/wiki/Stirling%27s_approximation)

From inspection we can see that even the:

(n/e)^n term grows faster than 2^n

Hope this makes sense(28 votes)
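The increasing-ratio argument above is easy to check numerically; a small sketch:

```python
import math

# The ratio n! / 2^n grows without bound once n > 2: each new factor
# multiplies n! by n but multiplies 2^n only by 2.
prev = 0.0
for n in range(4, 15):
    ratio = math.factorial(n) / 2 ** n
    assert ratio > prev  # strictly increasing from n = 4 onward
    prev = ratio
print(prev)
```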

- What counts as constant, linear, polynomial, or exponential growth? What is the rule for it? I haven't learned logs yet. (5 votes)
- constant < logarithmic < linear < polynomial < exponential(23 votes)

- The article mentions that "computer scientists like to think in powers of 2." I'm guessing this has something to do with the binary nature of computers, but is there a more significant meaning to this? As someone who is new to computer science, thinking in powers of 10 seems easier to me.(3 votes)
- Modern computers work in binary, but there is a deeper meaning to it. Computer science is, to a large extent, built around the concept of Turing machines, which have a binary alphabet. ( https://en.wikipedia.org/wiki/Turing_machine ). The Church-Turing thesis (not proven) roughly says that if you can compute something, then it is computable on a Turing machine (see https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis), which is why it is used as the foundation of computer science.

Notably, when you see n used in functions for running times, the value of n represents the initial problem size as it would be represented on a Turing machine i.e. the size of the input data in binary. (This helps to make sure we are always comparing apples to apples)

Hope this makes sense(11 votes)

- What is the difference between lg and log notation? Do they mean the same thing or are they different?(4 votes)
- Here they are using:

- lg means log base 2

- log_a is log base a

The bad news:

- If you see log, or lg in other places it may mean something different.

e.g. log is often used to represent: log base 2, log base 10, log base e

e.g. lg is often used to represent: log base 2, log base 10, log base e

- Often, in computer science, log or lg will be assumed to be log base 2 without anyone mentioning it.

The good news:

- If you see ln then you can be fairly sure it means the natural log

- If you see log_a or lg_a you can be fairly sure it means log base a

- In asymptotic analysis, the base of our logarithms usually doesn't matter,

(because we can convert from one base to another by multiplying by a constant, and we can typically ignore these constants)(6 votes)

- Could you explain why a^n grows faster than n^b where a and b are constants? It might be possible for certain values of a and b but not for all. For example, consider the case where:

1. a^n = 100^n where a = 100

Plot is here: https://www.wolframalpha.com/input/?i=100^n+from+n%3D0+to+100

2. n^b = n^100 where b = 100

Plot is here: https://www.wolframalpha.com/input/?i=n^100+from+n%3D0+to+100

Looking at the plots I think function 2 increases faster than function 1, or am I wrong? (2 votes)

- Perhaps taking the log of both equations will clarify things:

log (a^n) = n log a

log (n^b) = b log n

Now, as n increases, it should be clear that the top equation is growing faster than the bottom one. (10 votes)
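Comparing those logarithms numerically shows the crossover; a small sketch (the helper names are ours; note that 100^n and n^100 are exactly equal at n = 100, and 100^n wins beyond that):

```python
import math

# Compare 100^n with n^100 by comparing their logarithms:
# log(100^n) = n * log 100, and log(n^100) = 100 * log n.
def exp_log(n):   # log of 100^n
    return n * math.log(100)

def poly_log(n):  # log of n^100
    return 100 * math.log(n)

for n in [10, 50, 100, 101, 1000]:
    winner = "100^n" if exp_log(n) > poly_log(n) else "n^100 (or tie)"
    print(n, winner)
```

So the plots above only show the early range where the polynomial is ahead; past n = 100 the exponential dominates forever.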

- **Is there a negative correlation between an algorithm's rate of growth and its efficiency**?

If my expectation about the negative correlation is correct, then the following statements are correct:

1. *Fast* rate of growth means a *slow* algorithm and, therefore, a *less* efficient algorithm.

2. *Slow* rate of growth means a *fast* algorithm and, therefore, a *more* efficient algorithm.

For example, the rate of growth of linear search is Θ(n), and the rate of growth of binary search is Θ(lg n). In this example, linear search has a faster rate of growth than binary search, which means that linear search is less efficient than binary search.

Is my expectation about the negative correlation correct? (3 votes)

- Yes.

If you take less time to complete something, then you must have worked faster.

If you take more time to complete something, then you must have worked slower.

Just remember that asymptotic complexity only applies to large values of n. Simpler algorithms with worse asymptotic complexity often outperform complex algorithms with better asymptotic complexity when the values of n are small.

e.g.

Insertion sort is O(n^2), and merge sort is O(n log n).

For large values of n (> 100000) merge sort is faster than insertion sort.

For smaller values of n (<100) insertion sort is faster than merge sort.

This does not contradict anything that asymptotic complexity says, because asymptotic complexity only talks about what happens when we have large values of n.(5 votes)

- Hm, maybe I'm missing something here. When you drop constants out, you explained why. But when you change a power of 10 to a power of 2 with a reason like "computer scientists like to think in powers of 2", I get confused. (4 votes)
- Hello,

You explain quite well the case of linear growth of an algorithm; it is actually quite intuitive, as is the constant case. From here it is more or less straightforward to see how an algorithm can grow at a polynomial rate. However, I struggle to understand how we can see or conclude that the growth rate of an algorithm is logarithmic. Could you please give an example? Thanks (2 votes)

- You get log_b( n ) growth when you can reduce the problem size to 1/b of its previous size after each step.

Why ?

It will take log_b( n ) steps to reduce the problem size to 1 ( at which point it can be easily solved )

If each step requires only a constant amount of time, the running time will be O( log_b( n ))

The classic example is binary search, where you chop the number of possible locations in half after each guess, yielding a running time of O( log_2( n ) ).

Note: O( log_b(n) ) is the same as O( log(n) )(4 votes)
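The shrinking idea above can be sketched as a loop that counts how many divisions it takes to reduce the problem to size 1 (the function name is ours):

```python
def steps_to_shrink(n, b=2):
    """Count how many times we can divide n by b before it reaches 1."""
    steps = 0
    while n > 1:
        n //= b   # shrink the problem to 1/b of its size (integer division)
        steps += 1
    return steps

# For n a power of b, this is exactly log_b(n).
print(steps_to_shrink(1024))      # log_2(1024) = 10
print(steps_to_shrink(1000, 10))  # log_10(1000) = 3
```

If each of those steps takes constant time, the total running time is O( log_b( n ) ).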