# Analysis of selection sort

Selection sort loops over indices in the array; for each index, selection sort calls `indexOfMinimum` and `swap`. If the length of the array is n, there are n indices in the array. Since each execution of the body of the loop runs two lines of code, you might think that 2n lines of code are executed by selection sort. But it's not true! Remember that `indexOfMinimum` and `swap` are functions: when either is called, some lines of code are executed.

How many lines of code are executed by a single call to `swap`? In the usual implementation, it's three lines, so that each call to `swap` takes constant time.

How many lines of code are executed by a single call to `indexOfMinimum`? We have to account for the loop inside `indexOfMinimum`. How many times does this loop execute in a given call to `indexOfMinimum`? It depends on the size of the subarray that it's iterating over. If the subarray is the whole array (as it is on the first call), the loop body runs n times. If the subarray is of size 6, then the loop body runs 6 times.

For example, let's say the whole array is of size 8, and think about how selection sort works.

- In the first call of `indexOfMinimum`, it has to look at every value in the array, and so the loop body in `indexOfMinimum` runs 8 times.
- In the second call of `indexOfMinimum`, it has to look at every value in the subarray from indices 1 to 7, and so the loop body runs 7 times.
- In the third call, it looks at the subarray from indices 2 to 7; the loop body runs 6 times.
- In the fourth call, it looks at the subarray from indices 3 to 7; the loop body runs 5 times.
- …
- In the eighth and final call of `indexOfMinimum`, the loop body runs just 1 time.
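The tally above can be checked with a short sketch. This is a minimal selection sort, assuming an `indexOfMinimum` whose loop examines every element of the subarray, including the start index (so the first call on an 8-element array does 8 iterations, matching the counts in the list); the `innerCount` variable and the sample array are added here just for the demonstration.

```javascript
var innerCount = 0; // tallies how many times the inner loop body runs

var indexOfMinimum = function(array, startIndex) {
    var minIndex = startIndex;
    // examine every element of the subarray, starting at startIndex itself
    for (var i = startIndex; i < array.length; i++) {
        innerCount++;
        if (array[i] < array[minIndex]) {
            minIndex = i;
        }
    }
    return minIndex;
};

var swap = function(array, firstIndex, secondIndex) {
    var temp = array[firstIndex];
    array[firstIndex] = array[secondIndex];
    array[secondIndex] = temp;
};

var selectionSort = function(array) {
    for (var i = 0; i < array.length; i++) {
        swap(array, i, indexOfMinimum(array, i));
    }
};

selectionSort([7, 3, 8, 1, 6, 4, 2, 5]); // an array of size 8
console.log(innerCount); // 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 36
```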

If we total up the number of times the loop body of `indexOfMinimum` runs, we get 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 36 times.

### Side note: Computing summations from 1 to n

How do you compute the sum 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 quickly? Here's a trick. Let's add the numbers in a sneaky order. First, let's add 8 + 1, the largest and smallest values. We get 9. Then, let's add 7 + 2, the second-largest and second-smallest values. Interesting, we get 9 again. How about 6 + 3? Also 9. Finally, 5 + 4. Once again, 9! So what do we have?

There were four pairs of numbers, each of which added up to 9. So here's the general trick to sum up any sequence of consecutive integers:

- Add the smallest and the largest number.
- Multiply by the number of pairs.

What if the number of integers in the sequence is odd, so that you cannot pair them all up? It doesn't matter! Just count the unpaired number in the middle of the sequence as half a pair. For example, let's sum up 1 + 2 + 3 + 4 + 5. We have two full pairs (1 + 5 and 2 + 4, each summing to 6) and one "half pair" (3, which is half of 6), giving a total of 2.5 pairs. We multiply 2.5 · 6 = 15, and we get the right answer.
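The pairing trick can be written as a tiny sketch. The function name `sumRange` is just an illustrative choice; the point is that (smallest + largest) times the number of pairs gives the sum, even when the number of pairs is a half-integer.

```javascript
// Sum the consecutive integers from lo to hi using the pairing trick:
// (smallest + largest) * (number of pairs).
var sumRange = function(lo, hi) {
    var count = hi - lo + 1;        // how many integers are in the sequence
    return (lo + hi) * (count / 2); // pair sum times number of pairs
};

console.log(sumRange(1, 5)); // 6 * 2.5 = 15
console.log(sumRange(1, 8)); // 9 * 4 = 36
```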

What if the sequence to sum up goes from 1 to n? We call this an **arithmetic series**. The sum of the smallest and largest numbers is n + 1. Because there are n numbers altogether, there are n/2 pairs (whether n is odd or even). Therefore, the sum of the numbers from 1 to n is (n + 1)(n/2), which equals n^2/2 + n/2. Try out this formula for n = 5 and n = 8.

## Asymptotic running-time analysis for selection sort

The total running time for selection sort has three parts:

1. The running time for all the calls to `indexOfMinimum`.
2. The running time for all the calls to `swap`.
3. The running time for the rest of the loop in the `selectionSort` function.

Parts 2 and 3 are easy. We know that there are n calls to `swap`, and each call takes constant time. Using our asymptotic notation, the time for all calls to `swap` is Θ(n). The rest of the loop in `selectionSort` is really just testing and incrementing the loop variable and calling `indexOfMinimum` and `swap`, and so that takes constant time for each of the n iterations, for another Θ(n) time.

For part 1, the running time for all the calls to `indexOfMinimum`, we've already done the hard part. Each individual iteration of the loop in `indexOfMinimum` takes constant time. The number of iterations of this loop is n in the first call, then n - 1, then n - 2, and so on. We've seen that this sum, 1 + 2 + ... + (n - 1) + n, is an arithmetic series, and it evaluates to (n + 1)(n/2), or n^2/2 + n/2. Therefore, the total time for all calls to `indexOfMinimum` is some constant times n^2/2 + n/2. In terms of big-Θ notation, we don't care about that constant factor, nor do we care about the factor of 1/2 or the low-order term. The result is that the running time for all the calls to `indexOfMinimum` is Θ(n^2).

Adding up the running times for the three parts, we have Θ(n^2) for the calls to `indexOfMinimum`, Θ(n) for the calls to `swap`, and Θ(n) for the rest of the loop in `selectionSort`. The Θ(n^2) term is the most significant, and so we say that the running time of selection sort is Θ(n^2).

Notice also that no case is particularly good or particularly bad for selection sort. The loop in `indexOfMinimum` will always make n^2/2 + n/2 iterations, regardless of the input. Therefore, we can say that selection sort runs in Θ(n^2) time in all cases.

Let's see how the Θ(n^2) running time affects the actual execution time. Let's say that selection sort takes approximately n^2/10^6 seconds to sort n values. Let's start with a fairly small value of n, say n = 100. Then the running time of selection sort is about 100^2/10^6 = 1/100 seconds. That seems pretty fast. But what if n = 1000? Then selection sort takes about 1000^2/10^6 = 1 second. The array grew by a factor of 10, but the running time increased 100 times. What if n = 1,000,000? Then selection sort takes 1,000,000^2/10^6 = 1,000,000 seconds, which is a little more than 11.5 *days*. Increasing the array size by a factor of 1000 increases the running time a million times!

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.
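The arithmetic in the timing example above can be checked with a quick sketch. The n^2/10^6-second cost model is the article's illustrative assumption, not a measured constant:

```javascript
// Estimated running time, in seconds, under the article's assumed model:
// selection sort takes about n^2 / 10^6 seconds to sort n values.
var estimatedSeconds = function(n) {
    return (n * n) / 1e6;
};

console.log(estimatedSeconds(100));     // 0.01 seconds
console.log(estimatedSeconds(1000));    // 1 second
console.log(estimatedSeconds(1000000)); // 1000000 seconds, about 11.5 days
```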

## Want to join the conversation?

- In the Project: Selection Sort Visualiser, do we have to draw the lines manually or using the displayArray() function? If it is using displayArray() function then how? Please help...(14 votes)
- You will need to write the code to draw the lines or arrows yourself. However, you don't want to "hard code" the lines i.e. you should be able to change the array you are sorting, and your code should automatically draw the lines in the proper spot.

As far as the displayArray() function goes, it doesn't really do anything right now. You may want to put your code to display your array in it. Feel free to modify it or add additional functions to complete your project.

Good luck(6 votes)

- I was wondering if someone could elaborate on the statement in the second-to-last paragraph that "no case is particularly good or particularly bad for selection sort." If n is 1, why does that not count as a particularly good case for selection sort? It would still be (n^2/2 + n/2) iterations, but wouldn't that be like running near O(n) and therefore being really good? I must be confused.(3 votes)
- What it is talking about is, for a given size of the input n, some algorithms will behave poorly or really well depending on how the input is arranged.

Some sorting algorithms, like insertion sort, really like arrays that are sorted or almost sorted. For an already sorted array, insertion sort will run in O(n), which is fantastic, especially when compared to its worst-case running time of O(n^2).

Some sorting algorithms, like quick sort (using the front or back element as a pivot), hate arrays that are nearly sorted or already sorted. For an already sorted array, quick sort degrades to O(n^2), which is quite bad when compared to its average-case running time of O(n log n).

Other sorting algorithms, like selection sort, don't really care what the array looks like. These algorithms will typically perform the same number of steps regardless of what the input looks like. Selection sort will run in O(n^2) regardless of whether the array is already sorted or not.

Hope this makes sense(19 votes)

- I need help with making my text appear on the canvas in the project. How do I make it display the array?(5 votes)
- I decided to come here to get help because I have been waiting over a week to get help on the Selection sort visualizer project, using the Help requests tab on the project page.

I got it to display lines, the final array, and rectangles. However, I need help in displaying the initial array and the steps in between. Any help would be greatly appreciated.

```javascript
//display the array
var displayArray = function(array) {
    fill(255, 0, 0, 0);
    stroke(0, 0, 0);
    rect(array[0], array[0], array.length * 28, array.length * 20);
    textFont(createFont("monospace"), 12);
    fill(255, 0, 0);
    text(array, array[0] + 3, array[0] + 15);
};

//swap function to rearrange the array
var swap = function(array, firstIndex, secondIndex) {
    var temp = array[firstIndex];
    array[firstIndex] = array[secondIndex];
    array[secondIndex] = temp;
};

//find the minimum in the original array
var indexOfMinimum = function(array, startIndex) {
    var minValue = array[startIndex];
    var minIndex = startIndex;
    for (var i = minIndex + 1; i < array.length; i++) {
        if (array[i] < minValue) {
            minIndex = i;
            minValue = array[i];
            line(array[i], array[i], array[0], array[0]);
        }
    }
    return minIndex;
};

//sort the array
var selectionSort = function(array) {
    var relocate;
    for (var i = 0; i < array.length; i++) {
        relocate = indexOfMinimum(array, i);
        swap(array, i, relocate);
    }
    displayArray(array);
};

//display the various arrays
var array = [2, 1, 3, 4, 5];
selectionSort(array);
var backward = [450, 400, 300, 280];
selectionSort(backward);
var samething = [250, 250, 300, 200];
selectionSort(samething);
var loneStar = [150, 110, 200, 201];
selectionSort(loneStar);
```

(4 votes)

- Theta of n squared!? That really is terrible; does Selection Sort have any practical uses?(2 votes)
- Selection sort shares many of the benefits that insertion sort has, which is also O(n^2):

- It performs well on small inputs. (If n is small it will beat O(n log n) sorts )

- It requires only constant extra space (unlike merge sort)

It also has some extra benefits:

- It's very simple. So, it is easy to program.

- It only requires n swaps (which is better than most sorting algorithms)

- For the same set of elements, it will take the same amount of time regardless of how they are arranged. This can be good for real time applications.

Here are the Cons:

- O(n^2) is slower than O(n log n) algorithms (like merge sort) for large inputs.

- Insertion sort, which is also O(n^2), is usually faster than it on small inputs.

Situations when you want to use it include the following:

- You need a sort algorithm that is easy to program (or that requires a small amount of code)

- You only have a small number of elements to sort, so you feel that it is quick enough

- Swaps are expensive on your hardware, but you don't want to use the more complicated Cycle sort.

- You need the sorting time to be consistent for a given size.

Hope this makes sense(7 votes)

- I am still confused about how the indexOfMinimum loop is n^2, since it's just a normal for loop, just like the loop of the selectionSort function, except that it starts at minIndex + 1.(3 votes)
- Here's the quick and dirty (not entirely accurate) version:

- On average, indexOfMinimum loops ~n/2 times every time selectionSort calls it.
- selectionSort calls indexOfMinimum ~n times.

total number of loops = calls * average loops per call

total number of loops = n * n/2 = 1/2 * n^2

Hope this makes sense(4 votes)

- It is still not clear to me how to compute things. For example:

```javascript
var selectionSort = function(array)
{
    var minimumIndex = 0;                        // ONCE
    for (var i = 0; i < array.length; i++)       // n times
    {
        minimumIndex = indexOfMinimum(array, i); // ? if this is n(n+1)/2
        swap(array, i, minimumIndex);            // 3 times
    }
};
```

then it becomes a cubic algorithm.

If somebody is willing to clarify this it would be very helpful.

Thanks(3 votes)
- The for loop executes n times. The contents of the for loop are:

- indexOfMinimum which is O(n) (at most it searches all n elements to see if it is the min)

- swap is just O(1) (constant time)

So, at most, selection sort requires: n * O(n), which is O(n^2)

n(n+1)/2 is the sum of the series: 1 + 2 + 3 + 4 + ... + n

Which would be the sum of the number of elements that indexOfMinimum has to check for ALL of the iterations of the for loop. (In the above, when I showed that selectionSort was O(n^2) I just assumed that each time it searched all n elements, because it didn't affect the asymptotic analysis)

Hope this makes sense(2 votes)

- At the end of the 4th paragraph it says that if the subarray is of size 6, then the loop body of indexOfMinimum will run 6 times. The way I understood the loop, it will run 5 times, because the starting point is always startIndex + 1. So if startIndex = 0, it will start from 0 + 1 = 1 and go to less than 6, and that's 5 times. That means it will always run n - 1 times.

I think that's a mistake. Please explain if I am wrong.(3 votes)
- It depends on the implementation used to find the minimum value. If you start at startIndex, which is valid but less efficient, it takes n loops, but if you start at startIndex + 1, it takes n - 1 loops. At the end of the day, whether it takes n or n - 1 loops is not terribly important. What is important is that the number of loops is on the order of n.(2 votes)

- Am I the only one who can not get their project submitted here? I am done with it and I have tried submitting it tons of times. I tried everything and it just won't let me submit it.(3 votes)
- If you run selectionSort on the array [5,2,3,4,1], after the first iteration the array is sorted. Does the program go on to swap elements with themselves? How would you code a guard against this?(2 votes)
- Yes, selectionSort would continue to operate as normal (swapping each element with the minimum element >= its index), as it would be unaware that the rest of the array was sorted.

One could add a check to see if the subarray was sorted (just add a for loop to check that each element >= the previous), which would take linear time, i.e. O(n). So, while it would add computational time, the overall algorithm would still be O(n^2).

Normally, one wouldn't bother doing this with selection sort. Why?

If one thought that their arrays had large sorted chunks, they would switch to using insertion sort, or even the often maligned bubble sort. Insertion sort and bubble sort run in almost linear time i.e. O(n) on mostly sorted arrays.(1 vote)