Say I have some matrix a --
let's say a is n by n, so it looks something like this. You've seen this before,
a 1 1, a 1 2, all the way to a 1 n. When you go down the rows you
get a 2 1, that goes all the way to a 2 n. And let's say that there's some
row here, let's say row i, it looks like a i 1,
all the way to a i n. And then you have some other row here, row j, which is a j 1 all the way to a j n. And then you keep going all the way down to a n 1, a n 2, all the way to a n n. This is just an n by n matrix, and you can see that I took a little trouble to write out my i'th row here and my j'th row here.
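Just to have it all in one place, here's that same matrix written in standard subscript notation, with the i'th and j'th rows called out -- nothing new, this is exactly the matrix I just described:

```latex
A = \begin{bmatrix}
  a_{11} & a_{12} & \cdots & a_{1n} \\
  a_{21} & a_{22} & \cdots & a_{2n} \\
  \vdots &        &        & \vdots \\
  a_{i1} & a_{i2} & \cdots & a_{in} \\
  \vdots &        &        & \vdots \\
  a_{j1} & a_{j2} & \cdots & a_{jn} \\
  \vdots &        &        & \vdots \\
  a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
```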
And just to kind of keep things a little simple, let me just define -- just for notational purposes; you can view these as row vectors if you like, but I haven't formally defined row vectors, so I won't necessarily go there -- let's just define the term r i, we'll call that row i, to be equal to a i 1, a i 2, all the way to a i n. You can write it as
a vector if you like, like a row vector. We haven't really defined
operations on row vectors that well yet, but I think
you get the idea. We can then replace this guy
with r 1, this guy with r 2, all the way down. Let me do that, and I'll do
that in the next couple of videos because it'll simplify
things, and I think make things a little bit easier
to understand. So I can rewrite this matrix, this n by n matrix a, row by row. The first row I can write as just r 1. Actually, this just looks like a vector, a row vector, so let me write it as a vector like that. And I'm being a little bit hand-wavy here because all of our vectors have been defined as column vectors, but I think you get the idea. So that's r 1, and then
we have r 2 as the next row, and you keep going down until you get to r i -- that's this row right there -- and you keep going, you get to r j, and then you keep going until you get to the n'th row, r n. And each of these rows is going to have n terms because you have n columns. So that's another way of writing this same n by n matrix.
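Written out, the row notation is just this -- it's the same matrix a from above with each row abbreviated:

```latex
r_i = \begin{bmatrix} a_{i1} & a_{i2} & \cdots & a_{in} \end{bmatrix},
\qquad
A = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_i \\ \vdots \\ r_j \\ \vdots \\ r_n \end{bmatrix}
```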
Now what I'm going to do here is, I'm going to create a new matrix -- let's call it s, the swap matrix for rows i and j. So I'm going to swap i and j, those two rows. So what's the matrix going to look like? Every other row is going to stay the same. You have row 1 -- assuming that row 1 wasn't row i or row j; it could have been. Row 2, all the way down to -- now
instead of row i there you have row j there, and you go down, and instead of row j you have row i there. And you go down and
then you get r n. So what did we do? We just swapped these
two guys. That's what the swap
matrix is. Now, I think it was in the last video or a couple of videos ago, we learned that if you swap two rows of any n by n matrix, the determinant of the resulting matrix will be the negative of the original determinant. So the determinant of s, where we swapped the i'th and j'th rows, is going to be equal to minus the determinant of a.
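So, in the row notation, the swap matrix and the fact about its determinant look like this:

```latex
S = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_j \\ \vdots \\ r_i \\ \vdots \\ r_n \end{bmatrix}
\quad \text{(rows $i$ and $j$ of $A$ swapped)},
\qquad
\det(S) = -\det(A)
```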
Now, let me ask you an interesting question. What happens if those two rows were actually the same? What if r i was equal to r j? If we go back to the entries, that means a i 1 is equal to a j 1, a i 2 is equal to a j 2, all the way to a i n being equal to a j n. That's what I mean when I say
what happens if those two rows are equal to each other. Well, if those two rows are
equal to each other, then this matrix is no different from this
matrix here, even though we swapped them. If you swap two identical
things, you're just going to be left with the same
thing again. So if -- let me write this down -- if row i is equal to row j, then s, the swapped matrix, is equal to a. They'll be identical. You're swapping two rows that
are the same thing. So that implies the determinant of the swapped matrix is equal to the determinant of a. But we just said that when you swap two rows, the determinant of the swap matrix equals the negative of the determinant of a. So this tells us it also has to
equal the negative of the determinant of a. So what does that tell us? That tells us if a has two rows
that are equal to each other, if we swap them, we
should get the negative of the determinant, but if two rows are
equal we're going to get the same matrix again. So if a has two rows that are
equal-- so if row i is equal to row j-- then the determinant
of a has to be equal to the negative of
the determinant of a. We know that because a is the same thing as the swapped version of a, and the swapped version of a has to have the negative of the determinant of a. So these two things
have to be equal. Now what number is equal to a
negative version of itself? If I just told you x is equal
to negative x, what number does x have to be equal to? There's only one value it could possibly be: x would have to be equal to 0.
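So the whole argument, written as one chain, is just:

```latex
r_i = r_j \;\Longrightarrow\; S = A \;\Longrightarrow\; \det(A) = \det(S) = -\det(A)
\;\Longrightarrow\; 2\det(A) = 0 \;\Longrightarrow\; \det(A) = 0
```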
So the takeaway here is: if you have duplicate rows -- and you can extend this to the case where three or four rows are the same -- then the determinant of your matrix is 0. And that really shouldn't
be a surprise. Because if you have duplicate
rows, remember what we learned a long time ago. We learned that a matrix is invertible if and only if its reduced row echelon form is the identity matrix. But if you have two duplicate rows -- let's say these two rows are equal to each other -- you could perform a row operation where you replace one of them with itself minus the other, and you'll just get a row of 0's.
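For example -- and these are just numbers I'm picking to illustrate, not anything from above -- take a 3 by 3 matrix whose first and third rows are the same, and replace row 3 with row 3 minus row 1:

```latex
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 1 & 2 & 3 \end{bmatrix}
\;\xrightarrow{\;R_3 \to R_3 - R_1\;}\;
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 0 & 0 & 0 \end{bmatrix}
```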
And if you get a row of 0's, you're never going to be able to get the identity matrix. So we know that a matrix with duplicate rows can never have a reduced row echelon form equal to the identity. In other words, a matrix with duplicate rows is not invertible. And we also learned that
something is not invertible if and only if its determinant
is equal to 0. So we've now gotten to the same result
two different ways. One, we just used some
of what we learned. When you swap two rows, the determinant should become the negative, but if you swap two identical rows, you don't change the matrix at all, so the determinant has to stay the same. The only way the determinant can be both itself and the negative of itself is if it's 0. So if you have duplicate rows,
the determinant is 0. And that isn't something we had to show using this little swapping technique; we could have gone back to our requirements for invertibility -- I think that was five or six videos ago. But I just wanted to
point that out. If you see duplicate rows -- and actually if you see duplicate columns, too; I'll leave that for you to think about -- so if you see duplicate rows or duplicate columns, or even if
you just see that some rows are linear combinations of
other rows-- and I'm not showing that to you right here--
then you know that your determinant is going
to be equal to 0.
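And if you want to check all of this numerically, here's a quick sketch using numpy -- the matrices are just example values I'm making up, and because of floating point the determinants come back as numbers extremely close to 0 rather than exactly 0:

```python
import numpy as np

# Duplicate rows: row 2 is the same as row 0.
dup_rows = np.array([[1., 2., 3., 4.],
                     [5., 6., 7., 8.],
                     [1., 2., 3., 4.],
                     [9., 1., 2., 3.]])

# Duplicate columns: column 3 is the same as column 0.
dup_cols = np.array([[1., 2., 3., 1.],
                     [5., 6., 7., 5.],
                     [4., 8., 2., 4.],
                     [9., 1., 2., 9.]])

# Row 3 is a linear combination of other rows: 2*row0 + row1.
lin_comb = np.array([[1., 2., 3., 4.],
                     [5., 6., 7., 8.],
                     [2., 4., 1., 0.],
                     [7., 10., 13., 16.]])

for name, m in [("duplicate rows", dup_rows),
                ("duplicate columns", dup_cols),
                ("linear combination", lin_comb)]:
    # Each determinant is 0 (up to floating-point round-off).
    print(name, np.linalg.det(m))
```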