
Categorizing run time efficiency

It's important to understand the run time efficiency of the algorithms we use. When a computer takes too long to solve a problem, it costs more money and deprives the computer of time to spend on other valuable problems. Plus, if the algorithm is used inside a user-facing app, it may result in frustrated users who give up on the app entirely.

Categorizing run times

We can categorize the run time of an algorithm according to how the number of steps increases as the input size increases. Does it always take the same amount of time? That's a constant increase, a very fast run time. Does it always require looking at every possible permutation of the input? That's an exponential increase, a very slow run time. Most run times are somewhere between.

Constant time

When an algorithm runs in constant time, it takes a fixed number of steps, no matter how large the input grows.
As an example, consider accessing the first element of a list:
firstPost ← posts[1]
Even if the list grows to be a million items long, that operation will always require a single step.
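The same constant-time access can be sketched in Python (note that Python lists are 0-indexed, unlike the article's 1-indexed pseudocode; the list contents here are just an example):

```python
posts = ["intro", "update", "farewell"]  # hypothetical list of posts

# Accessing the first element is a single step,
# no matter how many items the list holds.
first_post = posts[0]
```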
We can visualize that relationship as a table:
List size | Steps
1 | 1
10 | 1
100 | 1
1000 | 1
We can also visualize it as a graph:
A line graph with an x axis of 0 to 1000 and a y axis of 0 to 20. Points are plotted at (1, 1), (10, 1), (100, 1), and (1000, 1). Lines are connecting the points.
A constant run time is ideal, but is typically not possible for algorithms that process multiple pieces of data.

Logarithmic time

When an algorithm runs in logarithmic time, its number of steps increases in proportion to the logarithm of the input size.
Binary search is an algorithm that runs in logarithmic time. Read the measuring efficiency article for a longer explanation of the algorithm.
Here's the pseudocode:
PROCEDURE searchList(numbers, targetNumber) {
  minIndex ← 1
  maxIndex ← LENGTH(numbers)
  REPEAT UNTIL (minIndex > maxIndex) {
    middleIndex ← FLOOR((minIndex+maxIndex)/2)
    IF (targetNumber = numbers[middleIndex]) {
      RETURN middleIndex
    } ELSE {
      IF (targetNumber > numbers[middleIndex]) {
        minIndex ← middleIndex + 1
      } ELSE {
        maxIndex ← middleIndex - 1
      }
    }
  }
  RETURN -1
}
The algorithm uses a loop to look at multiple items in the list, but crucially, it does not look at every item in the list. It looks at roughly log₂(n) items, where n is the number of items in the list.
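As a cross-check, here is one way the pseudocode above could be translated into Python (0-indexed, unlike the pseudocode's 1-indexed lists):

```python
def search_list(numbers, target_number):
    """Binary search: return the index of target_number, or -1 if absent.

    Requires numbers to be sorted. Each loop iteration halves the
    remaining search range, so it takes about log2(n) steps.
    """
    min_index = 0
    max_index = len(numbers) - 1
    while min_index <= max_index:
        middle_index = (min_index + max_index) // 2
        if numbers[middle_index] == target_number:
            return middle_index
        elif target_number > numbers[middle_index]:
            min_index = middle_index + 1
        else:
            max_index = middle_index - 1
    return -1
```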
We can visualize that relationship in a table:
List size | Steps
1 | 1
10 | 4
100 | 7
1000 | 10
We can also see that as a graph:
A line graph with an x axis of 0 to 1000 and a y axis of 0 to 20. Points are plotted at (1, 1), (10, 4), (100, 7), and (1000, 10). A curved line connects the points.
The number of steps is definitely increasing as input size increases, but at a very slow rate.

Linear time

When an algorithm grows in linear time, its number of steps increases in direct proportion to the input size.
The aptly-named linear search algorithm runs in linear time.
The pseudocode shows its simplicity compared to binary search:
PROCEDURE searchList(numbers, targetNumber) {
  index ← 1
  REPEAT UNTIL (index > LENGTH(numbers)) {
    IF (numbers[index] = targetNumber) {
      RETURN index
    }
    index ← index + 1
  }
  RETURN -1
}
This time, the loop looks at every item in the list. This exhaustive search is necessary when searching an unsorted list, since there's no way to narrow down where a particular item might be. In the worst case, when the target is the last item or isn't in the list at all, the algorithm requires as many steps as there are items in the list.
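The same linear search in Python (again 0-indexed) might look like:

```python
def linear_search(numbers, target_number):
    # Check every item in order; the worst case visits all n items.
    for index, number in enumerate(numbers):
        if number == target_number:
            return index
    return -1
```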
We can see that in table form:
List size | Steps
1 | 1
10 | 10
100 | 100
1000 | 1000
Or as a graph:
A line graph with an x axis of 0 to 1000 and a y axis of 0 to 1000. Points are plotted at (1, 1), (10, 10), (100, 100), and (1000, 1000). A straight line connects the points.

Quadratic time

When an algorithm grows in quadratic time, its steps increase in proportion to the input size squared.
Several list sorting algorithms run in quadratic time, like selection sort. That algorithm starts from the front of the list, then keeps finding the next smallest value in the list and swapping it with the current value.
Here's pseudocode for selection sort:
i ← 1
REPEAT UNTIL (i > LENGTH(numbers)) {
  minIndex ← i
  j ← i + 1
  // Find next smallest value
  REPEAT UNTIL (j > LENGTH(numbers)) {
    IF (numbers[j] < numbers[minIndex]) {
      minIndex ← j
    }
    j ← j + 1
  }
  // Swap if new minimum found
  IF (minIndex != i) {
    tempNum ← numbers[minIndex]
    numbers[minIndex] ← numbers[i]
    numbers[i] ← tempNum
  }
  i ← i + 1
}
The important thing to notice about the algorithm is the nested loop: it loops through each item in the list, and for each of those items, loops again through the remaining items. In this case, the algorithm ends up comparing ½(n² − n) values, where n is the number of items in the list.
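To see the ½(n² − n) comparison count concretely, here is a Python version of the pseudocode with a counter added (the counter is an illustration aid, not part of the original algorithm):

```python
def selection_sort(numbers):
    """Sort the list in place; return how many comparisons were made."""
    comparisons = 0
    n = len(numbers)
    for i in range(n):
        min_index = i
        # Find the next smallest value among the remaining items.
        for j in range(i + 1, n):
            comparisons += 1
            if numbers[j] < numbers[min_index]:
                min_index = j
        # Swap if a new minimum was found.
        if min_index != i:
            numbers[i], numbers[min_index] = numbers[min_index], numbers[i]
    return comparisons
```

For a list of 10 items, the counter comes out to 45, i.e. ½(10² − 10).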
This table shows how many items it'd examine for lists of increasing sizes:
List size | Steps
1 | 1
10 | 45
100 | 4950
1000 | 499500
Here's what that looks like in graph form:
A line graph with an x axis of 0 to 1000 and a y axis of 0 to 600,000. Points are plotted at (1, 1), (10, 45), (100, 4950), and (1000, 499500). A curved line goes through the points.
Both the table and the graph show that the number of steps for this algorithm increases at a much faster rate than the previous ones.

Exponential time

When an algorithm grows in exponential time, its number of steps increases as an exponential function of the input size.
An algorithm often requires exponential time when it must look at every possible combination of values. For example, consider an algorithm that generates all possible numerical passwords for a given password length.
For a password length of 2, it generates 100 passwords:
00 01 02 03 04 05 06 07 08 09
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29 
30 31 32 33 34 35 36 37 38 39
40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59  
60 61 62 63 64 65 66 67 68 69  
70 71 72 73 74 75 76 77 78 79  
80 81 82 83 84 85 86 87 88 89  
90 91 92 93 94 95 96 97 98 99  
That algorithm requires at least 10^2 steps, since there are 10 possibilities for each digit (0-9) and it must try out every possibility for each of the 2 digits.
For any given password length of n, the algorithm requires 10^n steps. That run time increases incredibly quickly, since each additional digit requires 10 times the number of steps.
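A brute-force generator like this can be sketched in Python with itertools.product (the function name here is just for illustration):

```python
from itertools import product

def generate_passwords(length):
    # Every numeric password of the given length: 10**length of them,
    # generated in order from "00...0" to "99...9".
    return ["".join(digits) for digits in product("0123456789", repeat=length)]
```

generate_passwords(2) produces the 100 two-digit passwords listed above, from "00" through "99".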
This table shows how fast that grows for just the first 5 digits:
Digits | Steps
1 | 10
2 | 100
3 | 1000
4 | 10000
5 | 100000
Here's what that looks like as a graph:
A line graph with an x axis of 0 to 50 and a y axis of 0 to 100,000. Points are plotted at (1, 10), (2, 100), (3, 1000), (4, 10000), and (5, 100000). A curved line goes through the points.

All together now

Now that we've seen examples of possible run times for algorithms, let's compare them on a graph:
A graph with an x axis of 0 to 20 and a y axis of 0 to 30. 5 lines are plotted on the graph, each with different labels and shapes. A straight line goes across the bottom of the graph, where y is 1, and is labeled "1". A curved line grows very slowly across the bottom of the graph and is labeled "log2(n)". A straight line goes diagonally across the graph and is labeled "n". A curved line grows quickly and is labeled "n^2". Another curved line grows even more quickly and is labeled "10^n".
That graph makes it even more clear that there's a definite difference in these run times, especially as input size increases. Since modern computer programs increasingly deal with large data sets (like from millions of users or sensors), the run time efficiency matters quite a bit.

Polynomial vs. superpolynomial

Computer scientists often classify run times into two classes:
  • Polynomial time describes any run time that does not increase faster than n^k, which includes constant time (n^0), logarithmic time (log₂(n)), linear time (n^1), quadratic time (n^2), and other higher degree polynomials (like n^3).
  • Superpolynomial time describes any run time that does increase faster than n^k, and includes exponential time (2^n), factorial time (n!), and anything else faster.
Why do we make the split between polynomial and superpolynomial?
This table of run times illustrates why:
n | 10 | 50 | 100 | 300 | 1000
5n | 50 | 250 | 500 | 1500 | 5000
n^2 | 100 | 2500 | 10000 | 90000 | 1 million (7 digits)
n^3 | 1000 | 125000 | 1 million (7 digits) | 27 million (8 digits) | 1 billion (10 digits)
2^n | 1024 | 16-digit number | 31-digit number | 91-digit number | 302-digit number
n! | 3.6 million (7 digits) | 65-digit number | 158-digit number | 615-digit number | unimaginably large
Those numbers have no units, so an n! algorithm may run in 3.6 million nanoseconds for an n of 10, which is less than a second. For an n of 50, it'd run in 3×10^64 nanoseconds, but only about 10^26 nanoseconds have passed since the Big Bang! A superpolynomial run time often requires more time than is available in the universe, even for relatively small input sizes.
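Python's arbitrary-precision integers make it easy to spot-check a few of the entries above:

```python
import math

def digit_count(value):
    # Number of decimal digits in a positive integer.
    return len(str(value))

two_to_fifty = digit_count(2 ** 50)                 # 2^n row, n = 50
fifty_factorial = digit_count(math.factorial(50))   # n! row, n = 50
```

2^50 has 16 digits and 50! has 65 digits, in line with the table.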
This is why we think of polynomial run times as reasonable and superpolynomial times as unreasonable. A polynomial run time isn't always ideal (and we often try to improve those times), but it is at least feasible.
Computer scientists concentrate their efforts on finding polynomial time algorithms for the problems that currently require superpolynomial time, since that's where the difference matters the most.

Further learning

"Asymptotic analysis" is a more formal method for analyzing algorithmic efficiency. That's not covered here, but if you're interested, you can learn more about Asymptotic analysis from our college-level Algorithms course.


Want to join the conversation?

  • 26issaca:
    I literally cannot understand these massive blocks of words
    (45 votes)
  • Joshua:
    I tried to manually implement selection sort with [2, 8, 5, 3, 9, 4, 1] using notepad while keeping track of the variable values. When I arrived at -
    '''REPEAT UNTIL (j > LENGTH(numbers)) {
    IF (numbers[j] < numbers[minIndex]) {
    minIndex ← j
    }'''
    , I realized that this loop has no expression that increments 'j' so I presumed that it will infinitely run by that loop. Then, I stopped evaluating the algorithm and just scanned over the following lines of code. In the block -
    '''IF (minIndex != i) {
    tempNum ← numbers[minIndex]
    numbers[i] ← tempNum
    numbers[minIndex] <- tempNum
    }'''
    , given that 'tempNum ← numbers[minIndex]', isn't it that 'numbers[minIndex] <- tempNum' will evaluate to 'numbers[minIndex] <- numbers[minIndex]'? I think the values didn't swap correctly. But this is just mere eyeballing of the code and not practically evaluating it. So there's a high chance that I'm actually wrong about my observations, and I'm really sorry if that's the case.
    (13 votes)
  • Arthur:
    Do I need to attend a college-level Algorithms course if my goal is to become a web developer? Or it isn't necessary for beginning a career in this field?
    (0 votes)
    • pamela ❤:
      It's not generally necessary, no. If your goal is to become a web developer, I would concentrate on learning web development skills and developing a portfolio of projects, like creating apps for hackathons or websites for local non-profits. You may find that you will be asked algorithm questions during an interview, but that is highly dependent on the company. For frontend development, I find that practical performance concerns come up more often than algorithmic complexity concerns (but those concerns do intertwine! ).

      If you do find the time to learn more algorithms: My friend Bianca teaches algorithms and data structures for JavaScript developers on the website Frontend Masters, that might be something to explore as a way to learn those topics. You can also learn some of that here, in our other Algorithms course, under the Computer Science section.
      (17 votes)
  • leaf red style avatar for user layaz7717
    What is n^k? n is any number, right? So then what is k and why is it relevant in terms of polynomial and superpolynomial time?
    (1 vote)
    • Abhishek Shah:
      Yes, k is any positive integer. The difference between n^k and k^n is best understood by noting that "n" is the input size. Putting the input in the exponent (for super polynomial) creates exponential growth. In contrast, polynomial growth puts a fixed k in the exponent.

      Hope that helps!
      (5 votes)
  • Johanne Antoine:
    There is an error for the 100 passwords, from the third row onward, the passwords say 21 to 29.
    (1 vote)
  • michael.relleum:
    The paragraph for Linear Time states "This algorithm will always require at least as many steps as items in the list." Shouldn't this be "at most" instead of "at least", because more often than not the element will not be at the end of the list and thus be found earlier? Or did I misunderstood that part?
    (2 votes)
  • Iron Programming:
    When you aren't in the lesson page (here), and click on this article, it has a lot of mysterious overflow to the right. Basically, a horizontal scrollbar pops up and allows you to scroll a lot to the right - so I am figuring that you forgot to add overflow-x: hidden; or something like that?
    (1 vote)
  • jwyayj:
    For the Quadratic time selection sort pseudocode example, I don't see any incremental increase for i or j. Wouldn't that mean that the algorithm wouldn't work properly? Otherwise I'm not sure what I'm missing.
    (1 vote)
  • luke.wang:
    For the linear search algorithm, it should require half as many steps as the list. Isn't that the average of where most target numbers would be located? Or are we assuming the target number is always the last one?
    (1 vote)
  • Jeronimo:
    Here's pseudocode for selection sort:
    1. When we swap first we must wright """numbers[minIndex] <- tempNum""" and only then """numbers[i] ← tempNum"""
    IF (minIndex != i) {
    tempNum ← numbers[minIndex]
    numbers[i] ← tempNum
    numbers[minIndex] <- tempNum
    2. And there also must be j++ and i++ in each "repeat" loop
    Am i not right?
    (1 vote)