
Big-Ω (Big-Omega) notation

Sometimes, we want to say that an algorithm takes at least a certain amount of time, without providing an upper bound. We use big-Ω notation; that's the Greek letter "omega."
If a running time is Ω(f(n)), then for large enough n, the running time is at least k · f(n) for some constant k.
We say that the running time is "big-Ω of f(n)." We use big-Ω notation for asymptotic lower bounds, since it bounds the growth of the running time from below for large enough input sizes.
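As a concrete sketch of this definition (illustrative Python; the running-time function T(n), the constant k, and the threshold n0 below are made up for the example, not taken from the article):

    # Illustrative only: T(n) is a made-up "running time" for this example.
    def T(n):
        return 3 * n * n + 5 * n   # hypothetical running-time function

    def f(n):
        return n * n               # candidate lower-bound function

    k, n0 = 3, 1                   # the constant k and threshold n0 from the definition

    # T(n) is Ω(f(n)) because T(n) >= k * f(n) for every n >= n0.
    assert all(T(n) >= k * f(n) for n in range(n0, 100_000))
    print("T(n) >= 3 * f(n) for all tested n >= 1, consistent with T(n) being Ω(n^2)")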
Just as Θ(f(n)) automatically implies O(f(n)), it also automatically implies Ω(f(n)). So we can say that the worst-case running time of binary search is Ω(log₂ n).
We can also make correct, but imprecise, statements using big-Ω notation. For example, if you really do have a million dollars in your pocket, you can truthfully say "I have an amount of money in my pocket, and it's at least 10 dollars." That is correct, but certainly not very precise. Similarly, we can correctly but imprecisely say that the worst-case running time of binary search is Ω(1), because we know that it takes at least constant time.
Of course, typically, when we are talking about algorithms, we try to describe their running time as precisely as possible. We provide the examples of the imprecise statements here to help you better understand big-Ω, big-O, and big-Θ.
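To make these statements about binary search concrete, here is an illustrative Python sketch (not part of the original article) that counts guesses in the best case (target found on the first guess) and the worst case (target absent), and compares the worst case against log₂ n:

    import math

    def count_guesses(arr, target):
        """Return the number of guesses binary search makes on a sorted list."""
        lo, hi, guesses = 0, len(arr) - 1, 0
        while lo <= hi:
            guesses += 1
            mid = (lo + hi) // 2
            if arr[mid] == target:
                return guesses
            elif arr[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return guesses  # target was not found

    for n in [16, 1024, 2**20]:
        arr = list(range(n))
        best = count_guesses(arr, arr[(n - 1) // 2])  # middle element: found at once
        worst = count_guesses(arr, n)                 # absent target: worst case
        print(f"n = {n}: best = {best}, worst = {worst}, "
              f"floor(log2 n) + 1 = {math.floor(math.log2(n)) + 1}")

The worst-case count grows like log₂ n, so it is Ω(log₂ n) (the tight lower bound) and also, loosely, Ω(1); the best case stays at one guess no matter how large n is.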

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.

Want to join the conversation?

  • Georgiy Ivankin
    I can't wrap my head around this: why is it correct to say that for binary search the running time is Ω(lg n)?

    The first paragraph clearly says that Ω defines the least running time, and for binary search the least running time is 1 guess, no matter how big n is, no?

    The statement that Θ(f(n)) automatically implies Ω(f(n)) seems incomprehensible to me... What am I getting wrong?
    (119 votes)
    • Cameron
      When we use asymptotic notation, unless stated otherwise, we are talking about worst-case running time.
      The worst-case running time for binary search is log(n).
      Recall:
      if f(n) is O(g(n)) this means that f(n) grows asymptotically no faster than g(n)
      if f(n) is Ω(g(n)) this means that f(n) grows asymptotically no slower than g(n)
      if f(n) is Θ(g(n)) this means that f(n) grows asymptotically at the same rate as g(n)
      As a result:
      if f(n) is Θ(g(n)) it is growing asymptotically at the same rate as g(n). So we can say that f(n) is not growing asymptotically slower or faster than g(n). But from the above, we can see this means that f(n) is Ω(g(n)) and f(n) is O(g(n)).

      Think of O as an upper bound, and Ω as a lower bound.
      These bounds can be tight or loose, but we prefer to make them as tight as possible.
      If we have tight bounds where O and Ω have the same growth rate then we precisely know the growth rate.
      If we can precisely give the growth rate then we know Θ.

      An analogy is the height of a neighbour.
      We can immediately say that the height of our neighbour is upper bounded by 1 million kilometers.
      We can also say that the height of our neighbour is lower bounded by 1 nanometer.
      These statements aren't very useful, because these bounds are so loose.
      However if we gave a lower bound for the height of our neighbour at 5' 5", and an upper bound of 5' 10" we would have a much better idea of how tall our neighbour was.
      If we had a lower bound of 5' 8" and an upper bound of 5' 8" we could then say that our neighbour is 5' 8".

      So for log(n) we could say:
      log(n) is O(n^n) since log(n) grows asymptotically no faster than n^n
      log(n) is O(n) since log(n) grows asymptotically no faster than n
      log(n) is O(log(n)) since log(n) grows asymptotically no faster than log(n)
      We went from loose upper bounds to a tight upper bound

      log(n) is Ω(1) since log(n) grows asymptotically no slower than 1
      log(n) is Ω(log(log(n))) since log(n) grows asymptotically no slower than log(log(n))
      log(n) is Ω(log(n)) since log(n) grows asymptotically no slower than log(n)
      We went from loose lower bounds to a tight lower bound

      Since we have log(n) is O(log(n)) and Ω(log(n)) we can say that log(n) is Θ(log(n)).
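
      A quick numeric illustration of those loose and tight bounds (a Python sketch; the sample values of n are arbitrary):

      import math

      for p in [10, 20, 30]:  # n = 2^10, 2^20, 2^30
          n = 2 ** p
          log_n = math.log2(n)
          print(f"n = 2^{p}: log2(n) = {log_n:.0f}, "
                f"log2(log2(n)) = {math.log2(log_n):.2f}, n = {n}")
      # log(log(n)) falls ever further below log(n) (a loose lower bound),
      # while n races ever further above it (a loose upper bound);
      # only log(n) itself tracks log(n) exactly (the tight bound).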

      Hope this makes sense
      (443 votes)
  • John Phipps
    If we know the big-Theta of a function, is there a reason we would want to know a different big-O or big-Omega for it? It seems that if you have the big-Theta, you would know the best big-O and big-Omega, so I don't see why you would want those values unless they were significantly easier to find than the big-Theta. Does the big-Theta give the best big-O and big-Omega possible?
    (21 votes)
  • Salman Oskooi
    My question has already been asked below, and only answered by Cameron, whose explanations I'm not understanding. Stephen Henn said:
    "How can that be: 'So we can say that the worst-case running time of binary search is Ω(lg n).' If Ω is the time an algorithm takes at the minimal running time?"
    Also jamie_chu78 asked the same question and got the same answer. Is there anyone who can explain in a different manner how Ω, which is the lower bound of running time, can be the worst-case running time? It really does seem like O should be the worst-case since that is the upper bound, or maximum amount of running time that f(n) can take.
    (14 votes)
    • Cameron
      Perhaps this will clear things up.

      A GAME
      Suppose the following:
      You are trapped in a prison and the warden of the prison will only let you go after you play his game.

      He shows you two identical looking boxes and tells you:
      - One box has between 10 and 20 bugs in it (we'll call this Box A)
      - One box has between 30 and 40 bugs in it (we'll call this Box B)
      You have to pick one of the boxes and eat the bugs inside the box.
      You don't know which box is Box A or which box is Box B.

      The warden knows, but doesn't tell you, that Box A actually has 17 bugs in it, and Box B actually has 32 bugs in it.

      WHAT WE MEAN BY UPPER AND LOWER BOUND
      Let's clarify what we mean by upper bound and lower bound by using Box A as an example.

      A lower bound for X is a value that X cannot be below.

      So, knowing that Box A has between 10 and 20 bugs we can say that:
      -Box A cannot have fewer than 5 bugs (this should be obvious, since 5 is less than 10)
      This means that we can say that 5 is a lower bound on the number of bugs in Box A.

      We can also say that:
      -Box A cannot have fewer than 10 bugs (this should be obvious, since 10 is the minimum given)
      This means that we can say that 10 is a lower bound on the number of bugs in Box A.

      Both 5 and 10 are valid lower bounds for the number of bugs in Box A.

      However, the lower bound of 10 is much more useful than the lower bound of 5, because it is closer to the actual number of bugs in the box.
      We tend to only mention the lower bound that is closest to the actual value, because it is the most useful.


      Similarly, an upper bound for X is a value that X cannot be above.

      We can say that:
      -Box A cannot have more than 50 bugs (this should be obvious, since 50 is more than 20)
      This means that we can say that 50 is an upper bound on the number of bugs in Box A.

      We can also say that:
      -Box A cannot have more than 20 bugs (this should be obvious, since 20 is the maximum given)
      This means that we can say that 20 is an upper bound on the number of bugs in Box A.

      Both 20 and 50 are valid upper bounds for the number of bugs in Box A.

      However, the upper bound of 20 is much more useful than the upper bound of 50, because it is closer to the actual number of bugs in the box.
      We tend to only mention the upper bound that is closest to the actual value, because it is the most useful.

      The actual value of X must always be between the lower bound of X and the upper bound of X.
      For Box A, the actual number of bugs must be between our lower and upper bounds for the number of bugs in Box A. (This is true, since 17 is between 10 and 20.)

      ANALYZING UPPER AND LOWER BOUNDS IN THE BEST AND WORST CASE
      You decide to analyze the upper and lower bounds on the number of bugs you have to eat in the best and worst case scenarios.
      (Let's assume that eating fewer bugs is better than eating more bugs.)

      The best case scenario: you pick Box A
      Since Box A has between 10 and 20 bugs in it:
      The lower bound on the number of bugs you have to eat, in the best case scenario, is 10
      The upper bound on the number of bugs you have to eat, in the best case scenario, is 20
      (In the best case, the actual number of bugs you have to eat is 17, but you don't know this.)

      The worst case scenario: you pick Box B
      Since Box B has between 30 and 40 bugs in it:
      The lower bound on the number of bugs you have to eat, in the worst case scenario, is 30
      The upper bound on the number of bugs you have to eat, in the worst case scenario, is 40
      (In the worst case, the actual number of bugs you have to eat is 32, but you don't know this.)

      So we can see from the above that:
      -the best case scenario has both lower and upper bounds
      -the worst case scenario has both lower and upper bounds

      What we can say (and what often causes people to confuse lower and upper bounds with best case and worst case) is that:
      -the upper bound, in the worst case, is as bad as it can possibly get
      (the upper bound in the worst case is the absolute upper bound for ALL cases)
      -the lower bound, in the best case, is as good as it can possibly get
      (the lower bound in the best case is the absolute lower bound for ALL cases)

      APPLYING THIS TO BINARY SEARCH
      So how does this apply to binary search?

      Let's analyze the best case and worst case for binary search.

      In the best case binary search finds its target on the first guess.
      Analysis shows that the running time, f(n), will be constant in this scenario.
      i.e. f(n) = c1 (where c1 is a constant)
      The lower bound on the running time, f(n), in the best case scenario, is Ω(1)
      The upper bound on the running time, f(n), in the best case scenario, is O(1)

      In the worst case binary search doesn't find its target.
      Analysis shows that this will require ~ log(n) guesses.
      The running time, f(n), in this scenario, will be:
      f(n) = c1 * log(n) + c2 (where c1 and c2 are constants)
      The lower bound on the running time, f(n), in the worst case scenario, is Ω(log n)
      The upper bound on the running time, f(n), in the worst case scenario, is O(log n)

      It should be mentioned that, often, for complex algorithms, we don't know what f(n) is in each of the scenarios (similar to how we didn't know the actual number of bugs we would eat in each scenario). However, we can often make simplifying assumptions that let us figure out the upper and lower bounds for f(n) in each scenario (similar to how we could figure out upper and lower bounds for the number of bugs we would have to eat for each scenario).

      Hope this makes sense
      (101 votes)
  • Thinh
    The statement that Θ(f(n)) automatically implies Ω(f(n)) seems incomprehensible to me...
    (3 votes)
    • Cameron
      if f(n) is O(g(n)) this means that f(n) grows asymptotically no faster than g(n)
      if f(n) is Ω(g(n)) this means that f(n) grows asymptotically no slower than g(n)
      if f(n) is Θ(g(n)) this means that f(n) grows asymptotically at the same rate as g(n)

      If f(n) is Θ(g(n)) then f(n) does not grow slower than g(n), because it grows asymptotically at the same rate as g(n). Since f(n) does not grow slower than g(n), it implies that f(n) is Ω(g(n)).

      Similarly, if f(n) is Θ(g(n)) then f(n) does not grow faster than g(n), because it grows asymptotically at the same rate as g(n). Since f(n) does not grow faster than g(n), it implies that f(n) is O(g(n)).
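
      For the formal version, here is a sketch using the standard textbook definitions (c_1, c_2, and n_0 are the usual witness constants):

        f(n) \in \Theta(g(n)) \iff \exists c_1, c_2 > 0,\ n_0 \text{ such that }
        0 \le c_1 g(n) \le f(n) \le c_2 g(n) \text{ for all } n \ge n_0

      Keeping only the left inequality (f(n) ≥ c_1 g(n) for all n ≥ n_0) is exactly the definition of f(n) being Ω(g(n)) with k = c_1; keeping only the right inequality gives f(n) being O(g(n)) with k = c_2.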
      (11 votes)
  • Soleiman wani
    I can't understand how they are different from each other, and they are not precise either: we omit constants and lower-power terms in all three notations. If the value is the same, what's the point of discussing three similar things under three different names?
    (3 votes)
    • Cameron
      You have an algorithm, and you want to know how fast it runs.

      Big-Oh gives you a ceiling on the time it takes, i.e. it can't take any more time than the Big-Oh bound.

      Big-Omega gives you a floor on the time it takes, i.e. it can't take any less time than the Big-Omega bound.

      The real running time is somewhere between the floor (Big-Omega) and the ceiling (Big-Oh). At first, when you do rough analysis on an algorithm, the floor and ceiling may have a big gap between them, and not be that close to the actual running time. As you do more detailed analysis the gap between the floor and ceiling gets smaller and they start to get closer to the real running time. If you finally squeeze the floor and the ceiling together until they meet, then you will have a good idea of what the running time function is (that's your Big Theta).

      Eliminating the constants and lower powers lets us focus on the things that make a big impact, by ignoring the things which have a small impact. Changing from 7n^2 to 2n^2 might seem like a big deal (one is 3.5x faster!), but for large values of n, changing from 7n^2 to 100 * n * log(n) could be hundreds of thousands of times faster.
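
      To see this numerically, here is an illustrative Python sketch using the cost functions from the paragraph above (the sample values of n are arbitrary):

      import math

      def cost_7n2(n):   return 7 * n * n                # 7n^2
      def cost_2n2(n):   return 2 * n * n                # 2n^2: same growth, 3.5x faster
      def cost_nlogn(n): return 100 * n * math.log2(n)   # 100 n log n: slower growth

      for n in [10**3, 10**6, 10**9]:
          print(f"n = {n}: 7n^2 / 2n^2 = {cost_7n2(n) / cost_2n2(n):.1f}x, "
                f"7n^2 / (100 n log n) = {cost_7n2(n) / cost_nlogn(n):,.0f}x")
      # The constant-factor ratio stays at 3.5x, but by n = 10^9 the n log n
      # algorithm is millions of times faster: growth rate dominates constants.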

      Hope this makes sense
      (9 votes)
  • ryan.watson
    I still don't understand what big-O is
    (3 votes)
  • visitarjun2001
    Shouldn't it say the best case running time for binary search is Ω(n)? Or did I misunderstand it?
    (2 votes)
    • Cameron
      You'll want to review that section again.

      The best case for binary search is that we find the target on the very first guess. That takes a constant amount of time. So, in the best case, binary search is Ω(1), O(1), which also means it is Θ(1).

      On the other hand, in the worst case, where we don't find the target, binary search is Ω(log(n)), O(log(n)), which also means it is Θ(log(n)).
      (6 votes)
  • Hayyan
    Hello!
    I could get the idea of the upper and lower bounds, using O and Ω, but I didn't get the following sentence:
    "you can also say that the worst-case running time of binary search is Ω(1), because it takes at least constant time."
    as we know that the worst case in binary search for an n-element array will always be log(n)+1 tries,
    and in a known case, like the worst case, O(f(n)) = Ω(f(n)) = Θ(f(n)), where f(n) = ln(n).

    And the complexity of binary search in the worst case will never be constant unless the array contains 1 item, right?
    (0 votes)
    • Cameron
      if f(n) is Ω(g(n)) this means that f(n) grows asymptotically no slower than g(n)

      log(n) is Ω(1) since log(n) grows asymptotically no slower than 1
      log(n) is also Ω(log(n)) since log(n) grows asymptotically no slower than log(n)

      So, saying that the worst-case running time for binary search is Ω(1) or Ω(log(n)) is correct either way. However, Ω(log(n)) provides a more useful lower bound than Ω(1).
      (8 votes)
  • tonis015
    This article states that Big-Omega is the "worst-case running time"; shouldn't it say "best-case running time", since it is the lower bound?
    (3 votes)
    • Cameron
      "best case", "average case", "worst case", etc. describe scenarios. Each of those scenarios has a function describing it's running time.
      e.g.
      Binary search:
      f_bestcase(n) = k1 (I find the target on the first guess) where k1 is a constant
      f_worstcase(n) = k2 * log_2(n) + k3 (After log_2(n) guesses, I've eliminated all the possible choices) where k2 and k3 are constants

      Big O, Big Omega, Big Theta are just ways to describe functions. I can use them to describe f_bestcase(n), or I can use them to describe f_worstcase(n).

      So I can say:
      f_bestcase(n) is Big Omega(1), and f_bestcase(n) is Big O(1).
      f_worstcase(n) is Big Omega(log n), and f_worstcase(n) is Big O(log n).

      Now, where people get confused is:
      - The Big Omega of the best case scenario, is the lowest (fastest) it can be for ANY scenario.
      - The Big O of the worst case is the highest (slowest) it can be for ANY scenario.

      Normally, when we talk about algorithms, we really only care about the worst case scenario, so people will say "binary search is big O(log n)," but what they mean is that binary search is big O(log n) in the worst case.

      Hope this makes sense
      (3 votes)
  • jacobrodri
    Is big omega the largest notation?
    (3 votes)
    • Martin
      What do you mean by "the largest"?
      The Big-Omega notation gives you a lower bound on the running time of an algorithm. So Big-Omega(n) means the algorithm runs in at least n time but could actually take a lot longer.
      The Big-O notation gives you an upper bound, so O(n) would mean the algorithm runs, in its worst case, in n or linear time.
      (3 votes)