
Asymptotic notation

Big-Ω (Big-Omega) notation

Sometimes, we want to say that an algorithm takes at least a certain amount of time, without providing an upper bound. We use big-Ω notation; that's the Greek letter "omega."
If a running time is Ω(f(n)), then for large enough n, the running time is at least k · f(n) for some constant k.
We say that the running time is "big-Ω of f(n)." We use big-Ω notation for asymptotic lower bounds, since it bounds the growth of the running time from below for large enough input sizes.
Just as Θ(f(n)) automatically implies O(f(n)), it also automatically implies Ω(f(n)). So we can say that the worst-case running time of binary search is Ω(log₂ n).
We can also make correct, but imprecise, statements using big-Ω notation. For example, if you really do have a million dollars in your pocket, you can truthfully say "I have an amount of money in my pocket, and it's at least 10 dollars." That is correct, but certainly not very precise. Similarly, we can correctly but imprecisely say that the worst-case running time of binary search is Ω(1), because we know that it takes at least constant time.
Of course, typically, when we are talking about algorithms, we try to describe their running time as precisely as possible. We provide the examples of the imprecise statements here to help you better understand big-Ω, big-O, and big-Θ.
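As a concrete check of the definition above (a sketch of my own, not part of the original article): the Python below counts the guesses binary search makes over the index range 0..n−1, and verifies that the worst case satisfies the Ω(log₂ n) definition with the constant k = 1. Searching for the last index is what forces the worst case for this particular implementation.

```python
import math

def binary_search_guesses(n, target):
    """Search for target among indices 0..n-1; return how many guesses it takes."""
    lo, hi = 0, n - 1
    guesses = 0
    while lo <= hi:
        guesses += 1
        mid = (lo + hi) // 2
        if mid == target:
            return guesses
        elif mid < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return guesses  # target not found

# Searching for the last index makes this implementation keep halving
# until one element remains, taking floor(log2 n) + 1 guesses -- which is
# at least 1 * log2(n), so the Omega(log2 n) definition holds with k = 1.
for n in [16, 1000, 10**6]:
    assert binary_search_guesses(n, n - 1) >= 1 * math.log2(n)
```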

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.

Want to join the conversation?

• I can't wrap my head around this: why is it correct to say that for binary search the running time is Ω(lg n)?

The first paragraph clearly says that Ω defines the least running time, and for binary search the least running time is 1 guess, no matter how big n is, no?

The statement that Θ(f(n)) automatically implies Ω(f(n)) seems incomprehensible to me... What am I getting wrong?
• When we use asymptotic notation, unless stated otherwise, we are talking about worst-case running time.
The worst-case running time for binary search is log(n).
Recall:
if f(n) is O(g(n)) this means that f(n) grows asymptotically no faster than g(n)
if f(n) is Ω(g(n)) this means that f(n) grows asymptotically no slower than g(n)
if f(n) is Θ(g(n)) this means that f(n) grows asymptotically at the same rate as g(n)
As a result:
if f(n) is Θ(g(n)) it is growing asymptotically at the same rate as g(n). So we can say that f(n) is not growing asymptotically slower or faster than g(n). But from the above, we can see this means that f(n) is Ω(g(n)) and f(n) is O(g(n)).

Think of O as an upper bound, and Ω as a lower bound.
These bounds can be tight or loose, but we prefer to make them as tight as possible.
If we have tight bounds where O and Ω have the same growth rate then we precisely know the growth rate.
If we can precisely give the growth rate then we know Θ.

An analogy is the height of a neighbour.
We can immediately say that the height of our neighbour is upper bounded by 1 million kilometers.
We can also say that the height of our neighbour is lower bounded by 1 nanometer.
These statements aren't very useful, because these bounds are so loose.
However if we gave a lower bound for the height of our neighbour at 5' 5", and an upper bound of 5' 10" we would have a much better idea of how tall our neighbour was.
If we had a lower bound of 5' 8" and an upper bound of 5' 8" we could then say that our neighbour is 5' 8".

So for log(n) we could say:
log(n) is O(n^n) since log(n) grows asymptotically no faster than n^n
log(n) is O(n) since log(n) grows asymptotically no faster than n
log(n) is O(log(n)) since log(n) grows asymptotically no faster than log(n)
We went from loose upper bounds to a tight upper bound

log(n) is Ω(1) since log(n) grows asymptotically no slower than 1
log(n) is Ω(log(log(n))) since log(n) grows asymptotically no slower than log(log(n))
log(n) is Ω(log(n)) since log(n) grows asymptotically no slower than log(n)
We went from loose lower bounds to a tight lower bound

Since we have log(n) is O(log(n)) and Ω(log(n)) we can say that log(n) is Θ(log(n)).

Hope this makes sense
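One way to see that loose-to-tight progression numerically (my own sketch, not part of the original answer): divide log₂(n) by each candidate lower bound and watch the ratio as n grows. A loose bound like Ω(1) lets the ratio grow without limit; the tight bound Ω(log n) pins it at a constant.

```python
import math

def ratio(n, g):
    """How many times bigger log2(n) is than the candidate bound g(n)."""
    return math.log2(n) / g(n)

for n in [10**2, 10**4, 10**8]:
    print(n,
          ratio(n, lambda m: 1),                        # vs Omega(1): keeps growing
          ratio(n, lambda m: math.log2(math.log2(m))),  # vs Omega(log log n): still grows
          ratio(n, lambda m: math.log2(m)))             # vs Omega(log n): constant 1
```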
• If we know the big-Theta of a function, is there a reason that we would want to know a different big-O or big-Omega for the function? It just appears that if you have the big-Theta you would know the best big-O and big-Omega so I don't know why you would want to know these values unless they were significantly easier to find than the big-Theta. Does the big-Theta give the best big-O and big-Omega possible?
• If an algorithm is big-Theta of a function, it is also big-O and big-Omega of that function. Normally you would find the big-O and big-Omega, then if they are the same you know the big-Theta.
• My question has already been asked below, and only answered by Cameron, whose explanations I'm not understanding. Stephen Henn said:
"How can that be: 'So we can say that the worst-case running time of binary search is Ω(lg n).' If Ω is the time an algorithm takes at the minimal running time?"
Also jamie_chu78 asked the same question and got the same answer. Is there anyone who can explain in a different manner how Ω, which is the lower bound of running time, can be the worst-case running time? It really does seem like O should be the worst-case since that is the upper bound, or maximum amount of running time that f(n) can take.
• Perhaps this will clear things up.

A GAME
Suppose the following:
You are trapped in a prison and the warden of the prison will only let you go after you play his game.

He shows you two identical looking boxes and tells you:
- One box has between 10 and 20 bugs in it (we'll call this Box A)
- One box has between 30 and 40 bugs in it (we'll call this Box B)
You have to pick one of the boxes and eat the bugs inside the box.
You don't know which box is Box A or which box is Box B.

The warden knows, but doesn't tell you, that Box A actually has 17 bugs in it, and Box B actually has 32 bugs in it.

WHAT WE MEAN BY UPPER AND LOWER BOUND
Let's clarify what we mean by upper bound and lower bound by using Box A as an example.

A lower bound for X is a value that X can not be below.

So, knowing that Box A has between 10 and 20 bugs we can say that:
-Box A can not have less than 5 bugs (this should be obvious, since it is less than 10)
This means that we can say that 5 is a lower bound on the number of bugs in Box A.

We can also say that:
-Box A can not have less than 10 bugs (this should be obvious, since it is equal to 10)
This means that we can say that 10 is a lower bound on the number of bugs in Box A.

Both 5 and 10 are valid lower bounds for the number of bugs in Box A.

However, the lower bound of 10 is much more useful than the lower bound of 5, because it is closer to the actual number of bugs in the box.
We tend to only mention the lower bound that is closest to the actual value, because it is the most useful.

Similarly, an upper bound for X is a value that X can not be above.

We can say that:
-Box A can not have more than 50 bugs (this should be obvious, since it is more than 20)
This means that we can say that 50 is an upper bound on the number of bugs in Box A.

We can also say that:
-Box A can not have more than 20 bugs (this should be obvious, since it is equal to 20)
This means that we can say that 20 is an upper bound on the number of bugs in Box A.

Both 20 and 50 are valid upper bounds for the number of bugs in Box A.

However, the upper bound of 20 is much more useful than the upper bound of 50, because it is closer to the actual number of bugs in the box.
We tend to only mention the upper bound that is closest to the actual value, because it is the most useful.

The actual value of X must always be between the lower bound of X and the upper bound of X.
For Box A, the actual number of bugs must be between our lower and upper bounds for the number of bugs in Box A. (This is true, since 17 is between 10 and 20.)

ANALYZING UPPER AND LOWER BOUNDS IN THE BEST AND WORST CASE
You decide to analyze the upper and lower bounds on the number of bugs you have to eat in the best and worst case scenarios.
(Let's assume that eating fewer bugs is better than eating more bugs)

The best case scenario: you pick Box A
Since Box A has between 10 and 20 bugs in it:
The lower bound on the number of bugs you have to eat, in the best case scenario, is 10
The upper bound on the number of bugs you have to eat, in the best case scenario, is 20
(In the best case, the actual number of bugs you have to eat is 17, but you don't know this.)

The worst case scenario: you pick Box B
Since Box B has between 30 and 40 bugs in it:
The lower bound on the number of bugs you have to eat, in the worst case scenario, is 30
The upper bound on the number of bugs you have to eat, in the worst case scenario, is 40
(In the worst case, the actual number of bugs you have to eat is 32, but you don't know this.)

So we can see from the above that:
-the best case scenario has both lower and upper bounds
-the worst case scenario has both lower and upper bounds

What we can say (and what often causes people to confuse lower and upper bounds with best case and worst case) is that:
-the upper bound, in the worst case, is as bad as it can possibly get
(the upper bound in the worst case is the absolute upper bound for ALL cases)
-the lower bound, in the best case, is as good as it can possibly get
(the lower bound in the best case is the absolute lower bound for ALL cases)

APPLYING THIS TO BINARY SEARCH
So how does this apply to binary search?

Let's analyze the best case and worst case for binary search.

In the best case binary search finds its target on the first guess.
Analysis shows that the running time, f(n), will be constant in this scenario.
i.e. f(n) = c1 (where c1 is a constant)
The lower bound on the running time, f(n), in the best case scenario, is Ω(1)
The upper bound on the running time, f(n), in the best case scenario, is O(1)

In the worst case binary search doesn't find its target.
Analysis shows that this will require ~ log(n) guesses.
The running time, f(n), in this scenario, will be:
f(n) = c1 * log(n) + c2 ( where c1 and c2 are constants)
The lower bound on the running time, f(n), in the worst case scenario, is Ω(log(n))
The upper bound on the running time, f(n), in the worst case scenario, is O(log(n))

It should be mentioned that, often, for complex algorithms, we don't know what f(n) is in each of the scenarios (similar to how we didn't know the actual number of bugs we would eat in each scenario). However, we can often make simplifying assumptions that let us figure out the upper and lower bounds for f(n) in each scenario (similar to how we could figure out upper and lower bounds for the number of bugs we would have to eat for each scenario).

Hope this makes sense
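The best-case/worst-case split above can be sketched in a few lines of Python (my own illustration, not part of the original answer; the guess counter and the absent target −1 are hypothetical choices):

```python
def guesses(arr, target):
    """Count the guesses binary search makes, whether or not target is present."""
    lo, hi = 0, len(arr) - 1
    count = 0
    while lo <= hi:
        count += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return count
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return count  # target absent: the search ran to exhaustion

arr = list(range(1024))
best = guesses(arr, arr[(len(arr) - 1) // 2])  # middle element: found on the first guess
worst = guesses(arr, -1)                       # absent target: halves until nothing is left
```

Here best is 1 (constant time, so Θ(1)), and worst is 10 = log₂(1024), matching the Ω(log(n)) and O(log(n)) bounds above.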
• The statement that Θ(f(n)) automatically implies Ω(f(n)) seems incomprehensible to me...
• if f(n) is O(g(n)) this means that f(n) grows asymptotically no faster than g(n)
if f(n) is Ω(g(n)) this means that f(n) grows asymptotically no slower than g(n)
if f(n) is Θ(g(n)) this means that f(n) grows asymptotically at the same rate as g(n)

If f(n) is Θ(g(n)) then f(n) does not grow slower than g(n), because it grows asymptotically at the same rate as g(n). Since f(n) does not grow slower than g(n), it implies that f(n) is Ω(g(n)).

Similarly, if f(n) is Θ(g(n)) then f(n) does not grow faster than g(n), because it grows asymptotically at the same rate as g(n). Since f(n) does not grow faster than g(n), it implies that f(n) is O(g(n))
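To make the implication concrete, here is a hypothetical running time f(n) = 3n + 5 (my own example, not from the thread). It is Θ(n), and exhibiting the sandwich k₁·n ≤ f(n) ≤ k₂·n directly exhibits the Ω(n) and O(n) witnesses at the same time.

```python
def f(n):
    # Hypothetical running time: Theta(n)
    return 3 * n + 5

# Constants chosen by hand for this particular f:
# 3n <= 3n + 5 always, and 3n + 5 <= 4n whenever n >= 5.
k_lower, k_upper, n0 = 3, 4, 5
for n in range(n0, 10**4):
    assert k_lower * n <= f(n) <= k_upper * n  # Omega(n) and O(n) at once
```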
• Shouldn't it say the best case running time for binary search is omega(n). Or did I misunderstand it?
• You'll want to review that section again.

The best case for binary search is we find the target on the very first guess. That takes a constant amount of time. So, in the best case binary search is Ω(1), O(1), which also means it is Θ(1).

On the other hand, in the worst case, where we don't find the target, binary search is Ω(log(n)), O(log(n)), which also means it is Θ(log(n)).
• Hello!
I could get the idea of the upper and lower bounds, using O and Ω, but I didn't get the following sentence:
"you can also say that the worst-case running time of binary search is Ω(1), because it takes at least constant time."
as we know that the worst case in binary search for n-size array will always be log(n)+1 tries.
and in the known case, like the worst case, O(f(n)) = Ω(f(n)) = Θ(f(n)) ; f(n) = ln(n);

and the complexity of the binary search in worst case will never be constant unless the array contains 1 item, right?
• if f(n) is Ω(g(n)) this means that f(n) grows asymptotically no slower than g(n)

log(n) is Ω(1) since log(n) grows asymptotically no slower than 1
log(n) is also Ω(log(n)) since log(n) grows asymptotically no slower than log(n)

So, saying that the worst case running time for binary search is Ω(1) or Ω(log(n)) are both correct. However, Ω(log(n)) provides a more useful lower bound than Ω(1).
• I still don't understand what big-O is
• Big-O describes how the maximum number of operations a program completes changes with respect to increasingly large inputs.
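For instance (a sketch of my own, not from the article): counting the operations of a linear search shows the maximum operation count growing linearly with the input size, which is exactly what "linear search is O(n)" means.

```python
def linear_search_ops(arr, target):
    """Scan arr for target, counting how many elements are examined."""
    ops = 0
    for x in arr:
        ops += 1
        if x == target:
            return ops
    return ops  # target absent: every element was examined

# Worst case (target absent): the operation count equals n, so the maximum
# number of operations grows linearly with input size -- that is O(n).
for n in [10, 100, 1000]:
    assert linear_search_ops(list(range(n)), -1) == n
```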
• Could anyone tell me how we could select k = 5, as illustrated in this quiz: https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/e/quiz--asymptotic-notation ?

The thing that confuses me is this: since log₈(n) = lg(n)/lg(8) = (1/lg(8)) · lg(n), is it permissible to take the value of k as greater than 1, when you can see from the equation that the constant multiplier is 1/lg(8), which will always produce a value less than 1? Here 1/lg(8) = 1/3 ≈ 0.33 (a value less than 1).
• I think what you may be missing is that when one says:
f(n) is O( g(n) )
you are really saying:
for sufficiently large values of n, there exists some constant k > 0, such that:
f(n) <= k * g(n) .

So in the hints, they are picking k to be 5. That is, they are saying:
log(n) <= 5 * log_8(n)
log(n) <= 5 * 1/log(8) * log(n)
log(n) <= 5 * 1/3 * log(n)
log(n) <= 5/3 * log(n) (which is obviously true)

Hope this makes sense
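A quick numeric sanity check of that chain of inequalities (my own sketch, not part of the original answer):

```python
import math

# The hint's claim: log2(n) <= 5 * log8(n).
# Since log8(n) = log2(n) / 3, the right side is (5/3) * log2(n),
# so the inequality holds for every n >= 1 -- k does not need to be < 1,
# it only needs to absorb the 1/3 constant factor.
for n in [2, 10, 1000, 10**6]:
    assert math.log2(n) <= 5 * math.log(n, 8)
```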