
## Linear algebra

### Course: Linear algebra > Unit 2

Lesson 6: More determinant depth
- Determinant when row multiplied by scalar
- (correction) scalar multiplication of row
- Determinant when row is added
- Duplicate row determinant
- Determinant after row operations
- Upper triangular determinant
- Simpler 4x4 determinant
- Determinant and area of a parallelogram
- Determinant as scaling factor


# Duplicate row determinant

Determinant of a matrix with duplicate rows. Created by Sal Khan.

## Want to join the conversation?

- At 3:30, Sal mentioned that in the last couple of videos, we learnt that det(Sij) = -det(A). May I know where specifically I can find the proof of this result?
- We can prove this theorem by induction.

1) This rule holds for all 2x2 matrices.

Let A be the matrix:

```
a b
c d
```

and let S be the matrix (note that this is the only way to swap the rows of A):

```
c d
a b
```
Clearly, the determinant of A is ad-bc and the determinant of S is bc-ad, meaning det(S)=-det(A), proving the first part of the theorem.
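The 2x2 base case is easy to check directly. A minimal Python sketch; the helper `det2` and the sample entries are my own, not from the answer:

```python
def det2(m):
    # ad - bc for a 2x2 matrix given as [[a, b], [c, d]].
    (a, b), (c, d) = m
    return a * d - b * c

A = [[3, 7], [2, 5]]   # arbitrary sample entries
S = [A[1], A[0]]       # rows swapped
print(det2(A), det2(S))  # det2(S) == -det2(A)
```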

2) Given that this rule holds for all (m-1)X(m-1) matrices, this rule holds for all mXm matrices.

Let's say we have an mXm matrix A such that Sij is as defined in this video.

A can be represented like this:

```
[rows 1-->(i-1)]
i
[rows (i+1)-->(j-1)]
j
[rows (j+1)-->m]
```

Sij can be represented like this:

```
[rows 1-->(i-1)]
j
[rows (i+1)-->(j-1)]
i
[rows (j+1)-->m]
```

Let's say we take the determinant of A and of S by expanding along row j's entries for both matrices. For A, the minor corresponding to each element in row j will be (with the nth column removed):

```
[rows 1-->(i-1)]
i
[rows (i+1)-->(j-1)]
[rows (j+1)-->m]
```

For S, this matrix will be:

```
[rows 1-->(i-1)]
[rows (i+1)-->(j-1)]
i
[rows (j+1)-->m]
```

Let the former matrix be A[jn] and the latter matrix be Sij[jn]. As you can see, both of these matrices are (m-1)X(m-1) matrices and to transform A[jn] to Sij[jn], j-i-1 swaps were needed, so det(Sij[jn])=(-1)^(j-i-1)*det(A[jn]).

Also, remember that row j is now at row i in Sij. If j-i is even, then (-1)^(i+n)=(-1)^(j+n) for all natural n and there's no difference, but if j-i is odd, then (-1)^(i+n)=(-1)^(j+n)*-1 for all natural n, multiplying the coefficient of det(A) when calculating det(Sij) by -1.

Now let's review:

If j-i is even, then det(Sij)=(-1)^(j-i-1)*det(A). Since j-i is even, j-i-1 is odd, so det(Sij)=-det(A).

If j-i is odd, then det(Sij)=(-1)^(j-i-1)*-1*det(A). Since j-i is odd, j-i-1 is even, so (-1)^(j-i-1) is 1 and thus multiplying by it has no effect, meaning det(Sij)=-det(A).

Thus, this proves the second part of the theorem and since the first part of the theorem holds true, the theorem holds true for all mXm matrices such that m is natural and m>=2.

I hope this helps!
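The induction argument above can also be sanity-checked numerically. A minimal Python sketch using recursive cofactor (Laplace) expansion along the first row; the helper names and sample matrix are my own:

```python
def det(m):
    # Recursive cofactor (Laplace) expansion along the first row.
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for col in range(n):
        # Minor: delete row 0 and column `col`.
        minor = [row[:col] + row[col + 1:] for row in m[1:]]
        total += (-1) ** col * m[0][col] * det(minor)
    return total

def swap_rows(m, i, j):
    # Return a copy of m with rows i and j exchanged.
    s = [row[:] for row in m]
    s[i], s[j] = s[j], s[i]
    return s

a = [[2, 1, 3],
     [0, 4, 1],
     [5, 2, 2]]
print(det(a), det(swap_rows(a, 0, 2)))  # swapping any two rows flips the sign
```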

- At 3:18 he says he addressed row swapping in a recent video. I remember nothing about that. What video was it?
- Explanation and proof of "row swapping" using the definition of a determinant:

"A square array of numbers bordered on the left and right by a vertical line and having a value equal to the algebraic sum of all possible products where the number of factors in each product is the same as the number of rows or columns, each factor in a given product is taken from a different row and column, and the sign of a product is positive or negative depending upon whether the number of permutations necessary to place the indices representing each factor's position in its row or column in the order of the natural numbers is odd or even." (from https://www.merriam-webster.com/dictionary/determinant)

For an nxn matrix the determinant has n! products of n terms, each with its number of permutations of subscripts (permutations of "11 22 33 44 ... nn" based on the row and column of each of its factors). Let's choose (without loss of generality) that the 1st subscript is in the same numerical order for all n! products. The first subscript of the factors in each product then goes through the rows in that order and the second subscript varies in n! permutations so that each factor comes from a different column.

If we exchange two rows, then there is an additional permutation of row subscripts in all n! products, without changing any of the second (column) subscripts. So all the signs of the products are reversed, which reverses the sign of their sum, the determinant.

In a completely similar way, exchanging two columns also reverses the sign of the determinant (Let the second (column) subscript be in the same numerical order for all the products and let the order of the row subscripts vary for each of the n! products).
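The permutation definition quoted above translates almost directly into code. A short pure-Python sketch of this sum-over-permutations (Leibniz) formula; `sign()` computes a permutation's parity by counting inversions, and all helper names are my own:

```python
from itertools import permutations
from math import prod

def sign(p):
    # Parity of permutation p: -1 if the inversion count is odd, else +1.
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_leibniz(m):
    # Sum over all n! permutations of sign(p) * product of m[r][p[r]].
    n = len(m)
    return sum(sign(p) * prod(m[r][p[r]] for r in range(n))
               for p in permutations(range(n)))

a = [[1, 2], [3, 4]]
print(det_leibniz(a), det_leibniz([a[1], a[0]]))  # row swap flips the sign
```

Exchanging two rows composes every permutation with one extra transposition, flipping each term's parity, which is exactly the sign reversal described above.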

- Does this apply to duplicate columns as well?
- I have the same problem Bowen has: I can't figure out where Sal mentioned that in the last couple of videos we learned that det(Sij) = -det(A). In fact, I'm somewhat a meticulous learner. I've spent hours on this, but that doesn't matter; what matters is knowing. Any help, please? I'd like to see where Sal said it. If it's a mistake, no problem, I just want to know.
- It doesn't look like there's a video where he shows it, really. Do any of the responses to Bowen's question help?


## Video transcript

Say I have some matrix a --
let's say a is n by n, so it looks something like this. You've seen this before,
a 1 1, a 1 2, all the way to a 1 n. When you go down the rows you
get a 2 1, that goes all the way to a 2 n. And let's say that there's some
row here, let's say row i, it looks like a i 1,
all the way to a i n. And then you have some other row
here, a j, it's a j 1 all the way to a j n. And then you keep going all the
way down to a n 1, a n 2, all the way to a n n. This is just an n by n matrix,
and you can see that I took a little trouble to write out my
row a, my i'th row here and my j'th row here. And just to kind of keep things
a little simple, let me just define -- just for
notational purposes, you can view these as row vectors if
you like, but I haven't formally defined row
vectors so I won't necessarily go there. But let's just define the term r
i, we'll call that row i, to be equal to a i 1, a i 2,
all the way to a i n. You can write it as
a vector if you like, like a row vector. We haven't really defined
operations on row vectors that well yet, but I think
you get the idea. We can then replace this guy
with r 1, this guy with r 2, all the way down. Let me do that, and I'll do
that in the next couple of videos because it'll simplify
things, and I think make things a little bit easier
to understand. So I can rewrite this matrix,
this n by n matrix a, I can re-write it as just r i. Actually, this just looks
like a vector, it's just a row vector. Let me write it as a
vector like that. And I'm being a little bit
hand-wavy here because all of our vectors have been defined as
column vectors, but I think you get the idea. So let's call that r 1, and then
we have r 2 is the next row, all the way down. You keep going down, you get
to r i -- that's this row right there -- r i. You keep going down, you get r
j, and then you keep going down until you get
to the n'th row. And each of these guys are going
to have n terms because you have n columns. So that's another
way of writing this same n by n matrix. Now what I'm going to do here
is, I'm going to create a new matrix-- let's call it S i j,
the swap matrix of i and j. So I'm going to swap i and
j, those two rows. So what's the matrix
going to look like? Everything else is going
to be equal. You have row 1-- assuming that
1 wasn't one of the i or j's, it could have been. Row 2, all the way down to-- now
instead of a row i there you have a row j there, and you
go down and instead of a row j you have a row i there. And you go down and
then you get r n. So what did we do? We just swapped these
two guys. That's what the swap
matrix is. Now I think it was in the last
video or a couple of videos ago, we learned that if you just
swap two rows of any n by n matrix, the determinant of the
resulting matrix will be the negative of the original
determinant. So we get the determinant of
s, the swap of the i'th and the j rows is going to be equal
to the minus of the determinant of a. Now, let me ask you an
interesting question. What happens if those two rows
were actually the same? What if r i was equal to r j? If we go back to all of these
guys, if that row is equal to this row? That means that this guy is
equal to that guy, that the second column-- the second
column for that row all the way to the n'th guy is equal
to the n'th guy. That's what I mean when I say
what happens if those two rows are equal to each other. Well, if those two rows are
equal to each other, then this matrix is no different from this
matrix here, even though we swapped them. If you swap two identical
things, you're just going to be left with the same
thing again. So if-- let me write this down--
if row i is equal to row j, then this guy,
then s, the swapped matrix, is equal to a. They'll be identical. You're swapping two rows that
are the same thing. So that implies a determinant of
the swapped matrix is equal to the determinant of a. But we just said that the swap
matrix, when you swap two rows, equals the negative
of the determinant of a. So this tells us it also has to
equal the negative of the determinant of a. So what does that tell us? That tells us if a has two rows
that are equal to each other, if we swap them, we
should get the negative of the determinant, but if two rows are
equal we're going to get the same matrix again. So if a has two rows that are
equal-- so if row i is equal to row j-- then the determinant
of a has to be equal to the negative of
the determinant of a. We know that because the
determinant of a, or a is the same thing as the swapped
version of a, and the swapped version of a has to have the
negative determinant of a. So these two things
have to be equal. Now what number is equal to a
negative version of itself? If I just told you x is equal
to negative x, what number does x have to be equal to? There's only one value that it
could possibly be equal to. x would have to be equal to 0. So the takeaway here is, let's
say if you have duplicate rows-- you can extend this if
you have three or four rows that are the same-- leads
you to the fact that the determinant of your
matrix is 0. And that really shouldn't
be a surprise. Because if you have duplicate
rows, remember what we learned a long time ago. We learned that a matrix is
invertible if and only if the reduced row echelon form
is the identity matrix. We learned that. But if you have two duplicate
rows-- let's say these two guys are equal to each other--
you could perform a row operation where you replace this
guy with this guy minus that guy, and you'll just
get a row of 0's. And if you get a row of 0's,
you're never going to be able to get the identity matrix. So we know that duplicate rows
could never let the reduced row echelon form be
the identity. Or, matrices with duplicate rows are
not invertible. And we also learned that
something is not invertible if and only if its determinant
is equal to 0. So we now got to the same result
two different ways. One, we just used some
of what we learned. When you swap rows, it should
become the negative, but if you swap the same row, you
shouldn't change the matrix. So the determinant of
the matrix has to be the same as itself. So if you have duplicate rows,
the determinant is 0. Which isn't something we
had to derive using this little swapping technique; we could
have gone back to our requirements for
invertibility-- I think that was five or six videos ago. But I just wanted to
point that out. If you see duplicate rows, and actually if you see
duplicate columns-- I'll leave that for you to think about--
if you see duplicate rows or duplicate columns, or even if
you just see that some rows are linear combinations of
other rows-- and I'm not showing that to you right here--
then you know that your determinant is going
to be equal to 0.
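Both takeaways from the transcript (a duplicate row forces the determinant to 0, and so does a row that is a linear combination of other rows) can be checked with a small sketch; pure Python with cofactor expansion, and the sample matrices are my own:

```python
def det(m):
    # Cofactor expansion along the first row; fine for small matrices.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c]
               * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

r = [1, 2, 3]
dup = [r[:], [4, 5, 6], r[:]]              # rows 0 and 2 are identical
combo = [[1, 0, 1], [0, 1, 1], [1, 1, 2]]  # row 2 = row 0 + row 1
print(det(dup), det(combo))  # both are 0
```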