For any transformation that maps
from Rn to Rn, we've done it implicitly, but it's been
interesting for us to find the vectors that essentially just
get scaled up by the transformation. So the vectors that have this
form-- the transformation of my vector is just equal
to some scaled-up version of that vector, T(v) = lambda v. And if this doesn't look
familiar, I can jog your memory a little bit. When we were looking for
basis vectors for the transformation--
let me draw it. This was from R2 to R2. So let me draw R2 right here. And let's say I had the
vector v1 was equal to the vector 1, 2. And we had the line spanned
by that vector. We did this problem several
videos ago. And I had the transformation
that flipped across this line. So if we call that line l, T was
the transformation from R2 to R2 that flipped vectors
across this line. So it flipped vectors
across l. So if you remember that
transformation, if I had some random vector that looked like
that, let's say that's x, that's vector x, then the
transformation of x looks something like this. It's just flipped across
that line. That was the transformation
of x. And if you remember that video,
we were looking for a change of basis that would allow
us to at least figure out the matrix for the
transformation, at least in an alternate basis. And then we could figure
out the matrix for the transformation in the
standard basis. And the basis vectors we picked were
ones that didn't get changed much by the
transformation, or that only got scaled by the
transformation. For example, when I took the
transformation of v1, it just equaled v1. Or we could say that the
transformation of v1 just equaled 1 times v1. So if you just follow this
little format that I set up here, lambda, in this
case, would be 1. And of course, the vector
in this case is v1. The transformation just
scaled up v1 by 1. In that same problem, we had
the other vector that we also looked at. It was the vector v2,
which is-- let's say
it's 2, minus 1. And then if you take the
transformation of it, since it was orthogonal to the
line, it just got flipped over like that. And that was a pretty
interesting vector for us as well, because the transformation
of v2 in this situation is equal to what? Just minus v2. Or you could say that the
transformation of v2 is equal to minus 1 times v2.
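Just to make this concrete, here's a quick numerical sketch. We never wrote the matrix for this reflection down here, but there's a standard formula I'm pulling in (not something we derive in this video): reflecting across the line spanned by a unit vector u is given by the matrix 2uu^T - I.

```python
import numpy as np

# Reflection across the line spanned by v1 = (1, 2).
# Assumed formula: for a unit vector u, the matrix that reflects
# vectors across the line spanned by u is 2 * u * u^T - I.
u = np.array([1.0, 2.0]) / np.sqrt(5)   # unit vector along v1
A = 2 * np.outer(u, u) - np.eye(2)      # the reflection matrix

v1 = np.array([1.0, 2.0])
v2 = np.array([2.0, -1.0])

print(A @ v1)  # [1. 2.]  (up to rounding) -> T(v1) =  1 * v1, lambda =  1
print(A @ v2)  # [-2. 1.] (up to rounding) -> T(v2) = -1 * v2, lambda = -1
```

Both vectors come back as multiples of themselves, which is exactly the T(v) = lambda v condition from above.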
And these were interesting vectors for us because when we defined a new basis with these
guys as the basis vectors, it was very easy to figure out
our transformation matrix. And actually, that basis was
very easy to compute with. And we'll explore that a little
bit more in the future. But hopefully you realize that
these are interesting vectors. There was also the case where
we had a plane spanned by some vectors. And then we had another vector
that was popping out of the plane like that. And we were transforming things
by taking the mirror image across the plane and we're
like, well, in that transformation, these red
vectors don't change at all and this guy gets
flipped over. So maybe those would make for
good basis vectors. And they did.
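Here's the same kind of numerical sketch in R3. Again I'm pulling in a standard formula rather than deriving it: reflecting across a plane with unit normal n is I - 2nn^T, so vectors lying in the plane stay put and the normal gets flipped.

```python
import numpy as np

# Hypothetical R^3 version: reflect across the plane whose unit
# normal is n (here, the xy-plane). Assumed formula: I - 2 * n * n^T.
n = np.array([0.0, 0.0, 1.0])
A = np.eye(3) - 2 * np.outer(n, n)

in_plane = np.array([3.0, -2.0, 0.0])  # lies in the plane

print(A @ in_plane)  # [ 3. -2.  0.] -> unchanged, eigenvalue  1
print(A @ n)         # [ 0.  0. -1.] -> flipped,   eigenvalue -1
```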
So in general, we're always interested in the vectors that just get scaled up
by a transformation. It's not going to be
all vectors, right? This vector that I drew here,
this vector x, doesn't just get scaled up; it actually gets
changed, its direction gets changed. The vectors that do get scaled
might switch direction, going from this direction to that
direction. Or maybe that's x, and then the
transformation of x might be a scaled-up version of x. Maybe it's that. The actual, I guess, line that
they span will not change. And so that's what we're going
to concern ourselves with. These have a special name,
and I want to make this very clear because they're useful. It's not just some mathematical
game we're playing, although sometimes
we do fall into that trap. But they're actually useful. They're useful for defining
bases because in those bases it's easier to find
transformation matrices. They're more natural coordinate
systems. And oftentimes, the transformation
matrices in those bases are easier to compute with. And so these have
special names. Any vector that satisfies this
right here, T(v) = lambda v, is called an eigenvector for the
transformation T. And the lambda, the multiple
that it becomes-- this is the eigenvalue associated with
that eigenvector.
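And just to make that definition concrete, here's a hypothetical little helper-- my own sketch, not anything we've built in these videos. A nonzero v is an eigenvector of A exactly when A times v is parallel to v.

```python
import numpy as np

def is_eigenvector(A, v, tol=1e-9):
    # Illustrative check only: a nonzero v is an eigenvector of A
    # exactly when A @ v and v are linearly dependent (parallel).
    v = np.asarray(v, dtype=float)
    if np.allclose(v, 0.0):
        return False  # the zero vector is never an eigenvector
    return np.linalg.matrix_rank(np.column_stack([v, A @ v]), tol=tol) <= 1
```

With the reflection matrix from the earlier sketch, is_eigenvector(A, [1, 2]) and is_eigenvector(A, [2, -1]) both come back True, while a generic vector like [1, 0] comes back False.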
So in the example I just gave,
where the transformation is flipping around this line,
v1, the vector 1, 2, is an eigenvector of our
transformation. And its corresponding
eigenvalue is 1. This guy is also an
eigenvector-- the vector 2, minus 1. Eigenvector is a very fancy word, but all it
means is a vector that's just scaled up by a transformation. It doesn't get changed in any
more meaningful way than just the scaling factor. And its corresponding
eigenvalue is minus 1. If this transformation--
I don't know what its transformation matrix is. I forgot what it was. We actually figured it
out a while ago. If this transformation
can be represented as a matrix-vector product-- and it should
be; it's a linear transformation-- then any
v that satisfies the transformation of v is equal to lambda v
would also satisfy A times v is equal to lambda v, because
the transformation of v would just be A times v. These are also called
eigenvectors of A, because A is just really the matrix
representation of the transformation. So in this case, this would be
an eigenvector of A, and this would be the eigenvalue
associated with the eigenvector. So if you give me a matrix that
represents some linear transformation, you can also figure
these things out.
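We haven't developed a method for that yet, but as a preview sketch, a numerical library will hand these to you directly. Here's NumPy's built-in routine run on the reflection matrix from before-- just an illustration, not a substitute for understanding where the answers come from.

```python
import numpy as np

# The reflection matrix across the line spanned by (1, 2),
# written out explicitly (from the 2 * u * u^T - I formula above).
A = np.array([[-0.6, 0.8],
              [ 0.8, 0.6]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (unit-length) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

print(eigenvalues)   # 1 and -1, in whatever order NumPy picks
print(eigenvectors)  # columns point along (1, 2) and (2, -1)
```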
Now, in the next video, we're actually going to figure out a way to compute these
things. But what I want you to
appreciate in this video is that it's easy to say,
oh, the vectors that don't get changed much. But I want you to understand
what that means. They literally just get scaled up,
or maybe they get reversed. Their direction or the
lines they span fundamentally don't change. And the reason why they're
interesting for us is, well, one of the reasons why they're
interesting for us is that they make for interesting basis
vectors-- basis vectors whose transformation matrices
are maybe computationally simpler, or ones that make for
better coordinate systems.