
# Asymptotic notation

So far, we analyzed linear search and binary search by counting the maximum number of guesses we need to make. But what we really want to know is how long these algorithms take. We're interested in time, not just guesses. The running times of linear search and binary search include the time needed to make and check guesses, but there's more to these algorithms.
The running time of an algorithm depends on how long it takes a computer to run the lines of code of the algorithm—and that depends on the speed of the computer, the programming language, and the compiler that translates the program from the programming language into code that runs directly on the computer, among other factors.
Let's think about the running time of an algorithm more carefully. We can use a combination of two ideas. First, we need to determine how long the algorithm takes, in terms of the size of its input. This idea makes intuitive sense, doesn't it? We've already seen that the maximum number of guesses in linear search and binary search increases as the length of the array increases. Or think about a GPS. If it knew about only the interstate highway system, and not about every little road, it should be able to find routes more quickly, right? So we think about the running time of the algorithm as a function of the size of its input.
The second idea is that we must focus on how fast a function grows with the input size. We call this the rate of growth of the running time. To keep things manageable, we need to simplify the function to distill the most important part and cast aside the less important parts. For example, suppose that an algorithm, running on an input of size $n$, takes $6n^2 + 100n + 300$ machine instructions. The $6n^2$ term becomes larger than the remaining terms, $100n + 300$, once $n$ becomes large enough, 20 in this case. Here's a chart showing values of $6n^2$ and $100n + 300$ for values of $n$ from 0 to 100:

[Chart: $6n^2$ vs. $100n + 300$ for $n$ from 0 to 100]
We would say that the running time of this algorithm grows as $n^2$, dropping the coefficient 6 and the remaining terms $100n + 300$. It doesn't really matter what coefficients we use; as long as the running time is $an^2 + bn + c$, for some numbers $a > 0$, $b$, and $c$, there will always be a value of $n$ for which $an^2$ is greater than $bn + c$, and this difference increases as $n$ increases. For example, here's a chart showing values of $0.6n^2$ and $1000n + 3000$, where we've reduced the coefficient of $n^2$ by a factor of 10 and increased the other two constants by a factor of 10:

[Chart: $0.6n^2$ vs. $1000n + 3000$]
The value of $n$ at which $0.6n^2$ becomes greater than $1000n + 3000$ has increased, but there will always be such a crossover point, no matter what the constants are.
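To make the crossover concrete, here's a minimal sketch (Python, added for illustration) that searches for the first $n$ at which the quadratic term overtakes the remaining terms; it finds $n = 20$ for the first pair, matching the text:

```python
def crossover(dominant, rest):
    """Return the smallest integer n at which dominant(n) exceeds rest(n)."""
    n = 0
    while dominant(n) <= rest(n):
        n += 1
    return n

# 6n^2 vs. 100n + 300: crosses at n = 20, as the article says.
print(crossover(lambda n: 6 * n**2, lambda n: 100 * n + 300))

# 0.6n^2 vs. 1000n + 3000: the crossover moves out (to n = 1670),
# but it always exists for any positive leading coefficient.
print(crossover(lambda n: 0.6 * n**2, lambda n: 1000 * n + 3000))
```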
By dropping the less significant terms and the constant coefficients, we can focus on the important part of an algorithm's running time—its rate of growth—without getting mired in details that complicate our understanding. When we drop the constant coefficients and the less significant terms, we use asymptotic notation. We'll see three forms of it: big-$\Theta$ notation, big-O notation, and big-$\Omega$ notation.

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.

## Want to join the conversation?

• Please explain how it is derived that f(x) = 4x^2 - 5x + 3 is O(x^2):

|f(x)| = |4x^2 - 5x + 3|
<= |4x^2| + |-5x| + |3|
<= 4x^2 + 5x + 3, for all x > 0
<= 4x^2 + 5x^2 + 3x^2, for all x > 1
<= 12x^2, for all x > 1

Hence we conclude that f(x) is O(x^2)

Can someone explain the above proof step by step?
i. Why do we take absolute value?
ii. Why and how were all the terms replaced by x^2 terms?

• By definition, f(n) is O(g(n)) if:
There exist constants k and N, where k > 0, such that for all n > N:
f(n) <= k * g(n)

So to prove that f(x) = 4x^2 - 5x + 3 is O(x^2) we need to show that:
There exist constants k and N, where k > 0, such that for all x > N:
f(x) <= k * g(x)
4x^2 - 5x + 3 <= k * x^2

The way we show that is by finding some k and some N that will work.

The basic strategy is:
- break up f(x) into terms
- for each term, find some coefficient * x^2 term that is clearly equal to or larger than it
- this will show that f(x) <= the sum of the larger x^2 terms
- the coefficient for the sum of the x^2 terms will be our k

Explanation of provided proof:
f(x) = 4x^2 - 5x + 3
a number is always <= its absolute value e.g. -1 <= | -1 | and 2 <= | 2 |
so we can say that:
f(x) <= | f(x) |
f(x) <= |f(x)| = |4x^2 - 5x + 3|
4x^2 + 3 will always be positive, but -5x will be negative for x > 0
so we know that -5x is <= |-5x|, so we can say that:
f(x) <= |4x^2| + |-5x| + |3|
For x > 0, |4x^2| + |-5x| + |3| = 4x^2 + 5x + 3, so we can say that:
f(x) <= 4x^2 + 5x + 3, for all x > 0
Suppose x > 1. Multiplying both sides of x > 1 by x shows that x^2 > x.
So we can say x <= x^2.
This lets us replace each of our x terms with x^2, so we can say that:
f(x) <= 4x^2 + 5x^2 + 3x^2, for all x > 1
4x^2 + 5x^2 + 3x^2 = 12x^2 so we can say that:
f(x) <= 12x^2, for all x > 1

So our k = 12, and since we had to assume x > 1, we pick N = 1.

A slightly different approach that I would use would be:
Suppose N = 1, i.e. x > 1 (this usually has all the nice properties you want for simple big-O proofs)
f(x) = 4x^2 - 5x + 3
f(x) <= 4x^2 + 5x + 3 (for x > 1, a positive coefficient times x is larger than a negative coefficient times x)
Since x > 1, we can say x^2 > x (by multiplying both sides of inequality by x)
f(x) <= 4x^2 + 5x^2 + 3x^2
f(x) <= 12x^2
So our k = 12 and N=1 (which we assumed at the beginning)

Hope this makes sense
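A quick empirical spot-check of the bound derived above can catch algebra mistakes, though checking finitely many points is evidence, not a proof (a minimal sketch in Python, not part of the original answer):

```python
# Spot-check the bound derived above: f(x) = 4x^2 - 5x + 3 <= 12x^2
# for all x > N = 1, with k = 12. The algebra above is the real proof;
# this only tests a finite sample of points.

def f(x):
    return 4 * x**2 - 5 * x + 3

assert all(f(x) <= 12 * x**2 for x in range(2, 10_000))
print("f(x) <= 12x^2 held at every tested x")
```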
• Having worked with limits, dropping the n and constant terms makes sense to me. As n becomes really huge, 1000n's dinky contribution doesn't matter. But why is the coefficient on n^2 non-essential? Isn't 6n^2 fundamentally different from n^2 in terms of curve shape, especially if you're looking for functions to bound it, as in the next section? Reducing 100,000,000n^2 and 0.0000001n^2 both to n^2 seems weird to me if we were trying to compare the two algorithms that contained those terms.

• Here's why we drop the leading coefficient:
- Typically, with algorithms, the coefficient isn't that important. Often, we just want to compare the running times of algorithms that perform the same task. Often, these algorithms will have different dominant terms (i.e. they are in different complexity classes), which allows us to easily identify which algorithm has the faster running time for large values of n.
- Calculating the coefficients for the running time of algorithms is often time-consuming, error-prone, and inexact. However, identifying the most significant term for the running time is often straightforward.
- Often, for algorithms in the same complexity class that perform the same task, we would expect the coefficients to be similar (we would expect small differences and improvements between algorithms in the same complexity class).

However, sometimes, the coefficients do matter.
e.g.
We often use n^2 sorting algorithms (like insertion sort) for small values of n
We often use n log n sorting algorithms (like merge sort) for large values of n
Why don't we always use the n log n sorting algorithm?
The n^2 algorithms have small coefficients, and the n log n algorithms have large coefficients. Only when the value of n starts to get large do we see these n^2 algorithms running slower than the n log n algorithms.

So, while asymptotic notation can be a really useful to talk about and compare algorithms, it is definitely not without its limitations.

Hope this makes sense
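To see that trade-off numerically, here's a minimal sketch (Python, added for illustration); the coefficients 0.25 and 8 are invented for the example, since real constants depend on the implementation, language, and hardware:

```python
import math

# Hypothetical cost models: a quadratic algorithm with a small constant
# vs. an n log n algorithm with a large constant. With these made-up
# coefficients, the two models cross at n = 256.
def quadratic_cost(n):   # insertion-sort-like: c1 * n^2
    return 0.25 * n * n

def nlogn_cost(n):       # merge-sort-like: c2 * n * log2(n)
    return 8 * n * math.log2(n) if n > 1 else 0.0

for n in (10, 100, 256, 1000):
    cheaper = "n^2" if quadratic_cost(n) < nlogn_cost(n) else "n log n"
    print(f"n = {n:4d}: cheaper model is {cheaper}")
```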

• So if the running time is

(a * n^2) + (b * n) + c,

we just keep the n^2, dropping the coefficients and less significant terms, right?

• This article clearly says that we can ignore the leading coefficient and the remaining terms "ONCE N BECOMES LARGE ENOUGH"... therefore, isn't it part of the process to determine what value of "n" is large enough?

For example, here are the running times of two sorting algorithms a college student might use to sort their own test scores:

Alg1: T(n) = 0.5n^2
Alg2: T(n) = 20n

Alg2 only beats Alg1 once n exceeds 40, yet this article's process would lead one to choose Alg2, even though 99.99% of college students will not take anywhere near 40 tests (not assignments) for a single course; therefore, it would be wiser to use Alg1 to sort their own test scores.

This reminds me of physics, where everything is reduced to a simplistic model in idealistic conditions for a quick understanding of the situation, but whenever physics is applied to the real world, now we have to pay attention to all the small details we ignored previously. So in the same sense, it might be easier to explain this subject by first ignoring the leading coefficients and the lesser powered terms, but the question remains:

Is this what is done in practice when people make apps, games, etc.?

• When n is small (< 1000), you typically don't care which algorithm you use, as long as it is correct. Even an inefficient algorithm will usually run fast enough (as long as it's not exponential).

When you are at the stage of comparing algorithms running time it is usually because something is already running too slow (probably an inefficient algorithm being used), or because you know you're going to be dealing with a lot of data (values of n > 100,000).

If you are replacing an algorithm, because it is already too slow, you would typically measure its performance against a sample data set before and after the replacement to make sure that replacing it did, in fact, improve things.

When n is small, one may try to replace an algorithm with one that is worse asymptotically, in hopes that it performs better in practice on small values of n because it has smaller coefficients. Again, in these cases, it is very important to measure the real-world performance, to ensure the change did improve things.

In practice, you would almost never know the actual running-time equation. (Furthermore, even if you did have it, the coefficients would change depending on what computer you used.) You would probably just know that Alg1 is quadratic (perhaps because it has two nested for loops) and that Alg2 is linear (perhaps because it has a single for loop).

Hope this makes sense
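As an illustration of the trade-off in the question above, here's a minimal sketch (Python, added for illustration) using the questioner's hypothetical cost models; 0.5n^2 and 20n are modeled instruction counts, not measured times:

```python
# The questioner's hypothetical cost models (modeled costs, not
# measured running times).
def alg1_cost(n):   # quadratic, small coefficient
    return 0.5 * n * n

def alg2_cost(n):   # linear, large coefficient
    return 20 * n

# Find the first n at which the linear algorithm is strictly cheaper.
n = 1
while alg1_cost(n) <= alg2_cost(n):
    n += 1
print(n)  # prints 41; for n <= 40 the quadratic model is at least as cheap
```

In practice, as the answer above stresses, you would time real implementations on representative inputs (for example, with Python's timeit module) rather than rely on a cost model.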
• This is how I understand the above (if you feel confused, try reading this):

Basically, our search algorithm takes some time to complete its search and give us an answer. The time it takes increases if we give it a much longer array.

6n^2 + 100n + 300, evaluated at a few values of n:

| n | 6n^2 | 100n | 300 | total |
| --- | --- | --- | --- | --- |
| 100 | 60,000 | 10,000 | 300 | 70,300 |
| 200 | 240,000 | 20,000 | 300 | 260,300 |
| 300 | 540,000 | 30,000 | 300 | 570,300 |

Look at the table above, which shows the total time for each n. Which part of the equation brings in the biggest values? Which part contributes the most to the total? 6n^2. Since this part of the equation contributes the most to the total, 6n^2 becomes the deciding factor for how big the total time gets. So if we are analyzing the time taken on an array of any length n, and we don't have to be accurate or exact, we can just analyze the deciding factor, 6n^2. That is why the lesson above focuses on 6n^2.
I know that the 6 is removed; that's because the 6 never changes in the equation, while n^2 changes depending on the value of n, so why bother with the 6? We can just analyze n^2.

• What would happen when a is negative in the equation (a * n^2 + b * n + c)? What would the rate of growth be?