I've talked a lot about the idea
that eigenvectors could make for good bases or
good basis vectors. So let's explore that idea
a little bit more. Let's say I have some
transformation. Let's say it's a transformation
from Rn to Rn, and it can be represented
by the matrix, A. So the transformation of x
is equal to the n-by-n matrix, A times x. Now let's say that we have
n linearly independent eigenvectors of A. And this isn't always going to
be the case, but it can often be the case. It's definitely possible. Let's assume that A has
n linearly independent eigenvectors. So I'm going to call them v1,
v2, all the way through vn. Now, n linearly independent
vectors in Rn can definitely be a basis for Rn. We've seen that multiple
times. And what I want to show you in
this video is that this makes a particularly good basis
for this transformation. So let's explore that. So the transformation of each of
these vectors-- I'll write it over here. The transformation of vector 1
is equal to A times vector 1 and since vector 1 is an
eigenvector of A, that's going to be equal to some eigenvalue
lambda 1 times vector 1. We could do that for
all of them. The transformation of vector 2
is equal to A times v2, which is equal to some eigenvalue
lambda 2 times v2. And I'm just going to skip all
of them and just go straight to the nth one. We have n of these
eigenvectors. You might have a lot more. We're just assuming that A
has at least n linearly independent eigenvectors. In general, you could take
scaled up versions of these and they'll also be
eigenvectors. Let's see, so the transformation of vn is going to be equal to A times vn. And because these are all eigenvectors, A times vn is just going to be lambda n, some eigenvalue, times the vector vn.
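Just as an aside, here's a minimal numerical sketch of these eigenvector equations. The 2-by-2 matrix A below is purely a hypothetical example; the point is only to check that A times each eigenvector equals the corresponding eigenvalue times that eigenvector.

```python
import numpy as np

# Hypothetical example matrix with n = 2 linearly independent eigenvectors.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Check the defining property: T(v_i) = A v_i = lambda_i v_i for each eigenpair.
for i, lam in enumerate(eigenvalues):
    v = eigenvectors[:, i]
    assert np.allclose(A @ v, lam * v)
    print(f"lambda_{i + 1} = {lam:.1f}, A v = {A @ v}, lambda v = {lam * v}")
```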
also equal to? Well, this is equal to, and this is probably going to be unbelievably obvious to you, but this is the same thing as lambda 1 times v1 plus 0 times v2 plus, all the way, 0 times vn. And this right here is going to be 0 times v1 plus lambda 2 times v2 plus 0 times all of the other vectors, all the way to vn. And then this guy down here,
this is going to be 0 times v1 plus 0 times v2 plus 0 times
all of these basis vectors, these eigenvectors, but
lambda n times vn. This is almost stunningly
obvious, right? I just rewrote this as this plus
a bunch of zero vectors. But the reason why I wrote that is because, in a second, we're going to take these as a basis and we're going to find coordinates with respect to that basis, and so this guy's coordinates will be lambda 1 followed by a bunch of zeroes, because those are the coefficients on our basis vectors.
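Written out as coordinate vectors with respect to the basis we're about to define, those coefficients are:

```latex
[T(\vec{v}_1)]_B = \begin{bmatrix} \lambda_1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad
[T(\vec{v}_2)]_B = \begin{bmatrix} 0 \\ \lambda_2 \\ \vdots \\ 0 \end{bmatrix}, \quad \dots, \quad
[T(\vec{v}_n)]_B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ \lambda_n \end{bmatrix}
```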
So let's do that. Let's say I have some basis B that's equal to the set of these eigenvectors, v1, v2, all the way through vn. What I want to show you is that
when I do a change of basis-- we've seen this before--
in my standard coordinates or in coordinates
with respect to the standard basis, you give me some vector
in Rn, I'm going to multiply it times A, and you're
going to have the transformation of it. It's also going to be in Rn. Now, we know we can do
a change of basis. And in a change of basis, to go from B coordinates back to standard coordinates, you multiply by the change of basis matrix C, and to go from standard coordinates to B coordinates, you multiply by C inverse. The change of basis matrix C is just a matrix with all of these eigenvectors as its columns, so it's very easy to construct. But if you want to change a vector x from standard coordinates to our new basis, you multiply it by the inverse of that matrix. We've seen that multiple times. If the basis vectors are all orthonormal, then C inverse is the same thing as C transpose, but we can't assume that here.
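Here's a minimal sketch of that change of basis step, reusing the same hypothetical 2-by-2 matrix from before; the vector x is also just made up for illustration.

```python
import numpy as np

# Same hypothetical matrix as before; its eigenvectors will be our basis B.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# The change of basis matrix C just has the eigenvectors as its columns.
C = eigenvectors

# A made-up vector x, written in standard coordinates.
x = np.array([5.0, -1.0])

# Going from standard coordinates to B coordinates means multiplying by C inverse.
# np.linalg.solve(C, x) computes C^{-1} x without explicitly forming the inverse.
x_B = np.linalg.solve(C, x)

# Multiplying by C takes us back to standard coordinates.
assert np.allclose(C @ x_B, x)
print("x in B coordinates:", x_B)
```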
So C inverse times x is going to be x in our new basis. And if we want to find some
transformation, if we want to find the transformation matrix
for T with respect to our new basis, it's going to
be some matrix D. And if you multiply D times the B representation of x, you're going to get this guy, but you're going to get the B representation of that guy: the B representation of the transformation of the vector x. And if we want to go back and forth between the B representation and the standard representation of the transformation of x, we can: to get back to standard coordinates, you multiply by C and you'll just get the transformation of x, and to go the other way, you multiply by the inverse of your change of basis matrix. We've seen this multiple
times already. But what I've claimed, or kind of hinted at, is that if I have a basis that's defined by eigenvectors of A, this will be a very nice matrix, and this might be the coordinate system that you want to operate in, especially if you're going to apply this matrix a lot. If you're going to do this transformation on a lot of different things, over and over and over again, maybe to the same set, then it may be worth the overhead to do the conversion and just use this as your coordinate system. So let's see that this will
actually be a nice-looking, easy-to-compute-with and
actually diagonal matrix. So we know that the
transformation-- what is the transformation of-- let's
write this in a bunch of different formats. Let me scroll down
a little bit. So if I wanted to write the
transformation of v1 in B coordinates, what would it be? It's just going to be equal to--
well, these are the basis vectors, right? So it's the coefficient
on the basis vectors. So it's going to be equal to
lambda 1, and then there's a bunch of zeroes. It's lambda 1 times v1 plus 0
times v2 plus 0 times v3, all the way to 0 times vn. That's what it's equal to. But it's also equal to D, and
we can write D like this. D also represents the transformation from Rn to Rn, just with respect to a different coordinate system. So D is going to just be a bunch
of column vectors d1, d2, all the way through dn
times-- this is the same thing as D times our B representation of the vector v1. But what is our B
representation of the vector v1? Well, the vector, v1 is just 1
times v1 plus 0 times v2 plus 0 times v3 all the way
to 0 times vn. v1 is a basis vector. That's just 1 times itself plus
0 times everything else. So this is what its
representation is in the B coordinate system. Now, what is this going
to be equal to? And we've seen this before. This is all a bit of review. I might even be boring you. This is just equal to 1 times
d1 plus 0 times d2 plus 0 times all the other columns. This is just equal to d1.
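In symbols, that step is just:

```latex
[T(\vec{v}_1)]_B = D\,[\vec{v}_1]_B
= D \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
= \vec{d}_1
= \begin{bmatrix} \lambda_1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
```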
So just like that, we have our first column of our matrix D. We could just keep doing that. I'll do it multiple times. The transformation of v2 in our
new coordinate system with respect to our new basis is
going to be equal to-- well, we know what the transformation
of v2 is. It's 0 times v1 plus lambda 2
times v2 and then plus 0 times everything else. And that's the same thing as D,
d1, d2, all the way through dn times our B representation
of vector 2. Well, vector 2 is one of
the basis vectors. It's just 0 times v1 plus 1
times v2 plus 0 times v3 all the way, the rest is 0. So what's this going
to be equal to? This is 0 times d1 plus
1 times d2 and 0 times everything else, so
it's equal to d2. I think you get the
general idea. I'll do it one more time
just to really hammer the point home. The transformation of the nth
basis vector, which is also an eigenvector of our original
matrix A or of our transformation in standard
coordinates, in B coordinates, is going to be equal to what? Well, we wrote it
right up here. It's going to be a
bunch of zeroes. It's 0 times all of these guys
plus lambda n times vn. And this is going to be this guy
d1, d2, all the way to dn times the B representation of
the nth basis vector, which is just 0 times v1, 0 times v2
and 0 times all of them, except for 1 times vn. And so this is going to be equal
to 0 times d1 plus 0 times d2 plus 0 times all
of these guys all the way to 1 times dn. So that's going to
be equal to dn. And just like that, we know what
our transformation matrix is going to look like with
respect to this new basis, where this basis was defined or
it's made up of n linearly independent eigenvectors of
our original matrix A. So what does D look like? Our matrix D is going to look
like-- its first column is right there. We figured that one out. Lambda 1, and then we just
have a bunch of zeroes. Its second column
is right here. d2 is equal to this. It's 0, lambda 2, and then
a bunch of zeroes. And this is the case in general: each column has a zero everywhere except along the diagonal, where it has the eigenvalue for that eigenvector. The nth column is zeroes everywhere except for lambda n in the nth entry. So going down the diagonal, you're going to have lambda 1, lambda 2, lambda 3, all the way down to lambda n, with zeroes everywhere else.
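So, putting those columns side by side, D is the diagonal matrix of eigenvalues:

```latex
D = \begin{bmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}
```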
So this is a neat result. If A has n linearly independent eigenvectors, and this isn't always the case, but if we can find the eigenvectors and say, hey, I can take a collection of n of these that are linearly independent, then those will be a basis for Rn. n linearly independent vectors
in Rn are a basis for Rn. But when you use that basis,
when you use the linearly independent eigenvectors of A
as a basis, we call this an eigenbasis. The transformation matrix with
respect to that eigenbasis becomes a very, very nice matrix. It's super easy to multiply, super easy to invert, and super easy to take the determinant of. We've seen it multiple times. It just has a ton of neat properties. It's just a good basis to be dealing with.
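To make that concrete, here's a sketch, again with the same hypothetical 2-by-2 matrix, that checks that D, which, as we've seen before, equals C inverse times A times C, really does come out diagonal, and shows why a diagonal matrix is so cheap to work with.

```python
import numpy as np

# Same hypothetical example matrix as before.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
eigenvalues, C = np.linalg.eig(A)        # columns of C are the eigenvectors

# The transformation matrix with respect to the eigenbasis: D = C^{-1} A C.
D = np.linalg.inv(C) @ A @ C
print(np.round(D, 10))                   # diagonal, with the eigenvalues on it

# Why a diagonal matrix is so nice to work with:
print(np.linalg.det(D), np.prod(eigenvalues))   # determinant = product of the eigenvalues
print(np.diag(1.0 / eigenvalues))               # inverse = reciprocals on the diagonal
print(np.diag(eigenvalues ** 5))                # powers = eigenvalues raised to that power
```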
So that's kind of the big takeaway. In all of linear algebra, we did
all this stuff with spaces and vectors and all of that,
but in general, vectors are abstract representations
of real world things. You could represent a vector
as the stock returns or it could be a vector of weather
in a certain part of the country, and you can create
these spaces based on the number of dimensions
and all of that. And then you're going to
have transformations. Sometimes, like when we learn about Markov chains, your transformation is essentially the probability that, after one time increment, something in one state will change to some other state, and then you'll want to apply that matrix many, many times to see what the stable state is for a lot of things.
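For example, and this is just a sketch of the idea with a made-up two-state transition matrix, not the actual Markov chain material, working in the eigenbasis makes applying the matrix over and over again very cheap:

```python
import numpy as np

# A hypothetical 2-state transition matrix (each column sums to 1).
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])

eigenvalues, C = np.linalg.eig(P)        # columns of C are eigenvectors of P
# In the eigenbasis, P is represented by the diagonal matrix of its eigenvalues,
# so applying P k times is P^k = C D^k C^{-1}, and D^k is just each eigenvalue
# raised to the k-th power.
k = 1000
P_k = C @ np.diag(eigenvalues ** k) @ np.linalg.inv(C)

# Starting from a made-up initial state, repeated application settles down
# to the stable state.
x0 = np.array([1.0, 0.0])
print(P_k @ x0)      # approaches the steady-state distribution (2/3, 1/3)
```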
And I know I'm not explaining any of this to you well, but I wanted to tell you that all of
linear algebra is really just a very general way to solve a
whole universe of problems. And what's useful about this is
you can have transformation matrices that define
these functions essentially on data sets. And what we've learned now is
that when you look at the eigenvectors and the
eigenvalues, you can change your basis so that you can solve your problems in much simpler ways. And I know it's all very abstract right now, but you now have the toolkit, and for the rest of your life, you have to figure out how to apply this
toolkit to specific problems in probability or statistics or
finance or modeling weather systems or who knows
what else.