Showing that an eigenbasis makes for good coordinate systems

Video transcript

I've talked a lot about the idea that eigenvectors can make for good basis vectors, so let's explore that idea a little more. Say I have some transformation from Rn to Rn that can be represented by the matrix A, so the transformation of x is equal to the n-by-n matrix A times x. Now let's assume that A has n linearly independent eigenvectors. That isn't always going to be the case, but it often is, and it's definitely possible. I'm going to call them v1, v2, all the way through vn. Now, n linearly independent vectors in Rn can definitely be a basis for Rn; we've seen that multiple times. What I want to show you in this video is that this makes a particularly good basis for this transformation.

So let's explore that. The transformation of v1 is equal to A times v1, and since v1 is an eigenvector of A, that's going to be equal to some eigenvalue lambda 1 times v1. We can do that for all of them: the transformation of v2 is equal to A times v2, which is equal to some eigenvalue lambda 2 times v2. I'm going to skip the ones in between and go straight to the nth one. We have n of these eigenvectors; you might have a lot more, we're just assuming that A has at least n linearly independent ones, and in general you could take scaled-up versions of these and they'll also be eigenvectors. So the transformation of vn is going to be equal to A times vn, and because these are all eigenvectors, that's just going to be some eigenvalue lambda n times vn.

Now, what are these also equal to? This is probably going to be unbelievably obvious to you, but the first one is the same thing as lambda 1 times v1 plus 0 times v2 plus all the way to 0 times vn. The second one is 0 times v1 plus lambda 2 times v2 plus 0 times all of the other vectors through vn. And the last one is 0 times v1 plus 0 times v2 plus 0 times all of these eigenvectors, except for lambda n times vn. I mean, this is almost stunningly obvious, right? I just rewrote each of these as itself plus a bunch of 0 vectors. But the reason I wrote it that way is that in a second we're going to take these vectors as a basis and find coordinates with respect to that basis, and then this first guy's coordinates will be lambda 1, 0, 0, and so on, because those are the coefficients on our basis vectors.

So let's define that basis: say B is the basis made up of v1 through vn. What I want to show you is what happens when we do a change of basis. We've seen this before. In standard coordinates, you give me some vector x in Rn, I multiply it by A, and you get the transformation of x, which is also in Rn. We know we can do a change of basis: to go from x to its coordinates with respect to B, you multiply by C inverse, and to go back the other way you multiply by C, where the change of basis matrix C is just the matrix with all of these eigenvectors as its columns.
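If you want to see that machinery concretely, here is a minimal NumPy sketch. The 2-by-2 matrix A, the test vector x, and the variable names are illustrative choices of mine, not something from the video; the sketch just checks that each eigenvector column satisfies A times v equals lambda times v, and computes a vector's coordinates with respect to the eigenbasis by (in effect) multiplying by C inverse.

```python
import numpy as np

# A made-up 2x2 matrix standing in for the transformation T(x) = Ax.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are
# (unit-length) eigenvectors, which plays the role of the change of basis matrix C.
eigenvalues, C = np.linalg.eig(A)

# Check that A v_i = lambda_i v_i for each eigenvector column.
for i in range(C.shape[1]):
    v = C[:, i]
    assert np.allclose(A @ v, eigenvalues[i] * v)

# Coordinates of a vector x with respect to the eigenbasis: [x]_B = C^{-1} x.
x = np.array([1.0, 1.0])
x_B = np.linalg.solve(C, x)  # same result as np.linalg.inv(C) @ x
print(x_B)
```

Using np.linalg.solve rather than explicitly forming the inverse is just the numerically safer way to multiply by C inverse; conceptually it's the same step in the diagram.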
To change coordinates from x to our new basis, you multiply by the inverse of that matrix; we've seen that multiple times. If the basis vectors were all orthonormal, C inverse would be the same thing as C transpose, but we can't assume that here. So C inverse times x is x in our new basis. And if we want to find the transformation matrix for T with respect to our new basis, it's going to be some matrix D: if you multiply D times the B representation of x, you get the B representation of the transformation of x. If we want to go back and forth between that and the standard picture, you multiply by C in one direction, which gives you the transformation of x in standard coordinates, and by C inverse, the inverse of your change of basis matrix, in the other direction. We've seen this multiple times already.

What I've claimed, or at least hinted at, is that if I have a basis defined by eigenvectors of A, then D will be a very nice matrix. This might be the coordinate system you want to operate in, especially if you're going to apply this matrix a lot. If you're going to do this transformation on a lot of different things, over and over again, maybe to the same set, then it may be worth the overhead of doing the conversion and just using this as your coordinate system. So let's see that D actually is a nice-looking, easy-to-compute-with, and in fact diagonal matrix.

What is the transformation of v1 written in B coordinates? These are the basis vectors, so it's just the coefficients on the basis vectors: it's going to be lambda 1 and then a bunch of zeros, because the transformation of v1 is lambda 1 times v1 plus 0 times v2 plus 0 times v3, all the way to 0 times vn. But it's also equal to D times the B representation of v1. D is also a transformation from Rn to Rn, just in a different coordinate system, so we can write D as a bunch of column vectors d1, d2, all the way through dn. And what is the B representation of v1? Well, v1 is just 1 times v1 plus 0 times v2 plus 0 times v3, all the way to 0 times vn; v1 is a basis vector, so it's just 1 times itself plus 0 times everything else. So what is D times that equal to? We've seen this before, and this is a little bit of review, I might even be boring you: it's just 1 times d1 plus 0 times d2 plus 0 times all the other columns, so it's just d1. And just like that, we have the first column of our matrix D: lambda 1 followed by zeros.
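You can check that first-column step numerically. In the sketch below (reusing the made-up matrix A from above), D is formed as C inverse times A times C, which is what the back-and-forth in that diagram composes to, and multiplying D by the B representation of v1, which is (1, 0), does pick out a first column of (lambda 1, 0).

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
eigenvalues, C = np.linalg.eig(A)   # columns of C are the eigenvectors v1, v2
C_inv = np.linalg.inv(C)

# D represents the same transformation in B coordinates: D [x]_B = [T(x)]_B.
D = C_inv @ A @ C

# [v1]_B is (1, 0): v1 is 1 times itself plus 0 times the other basis vector.
v1_B = np.array([1.0, 0.0])

# D times [v1]_B picks out the first column of D, which should be (lambda_1, 0).
first_column = D @ v1_B
print(first_column)
assert np.allclose(first_column, [eigenvalues[0], 0.0])
```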
Let's keep doing that. The transformation of v2, with respect to our new basis, is going to be equal to what? Well, we know what the transformation of v2 is: it's 0 times v1 plus lambda 2 times v2 plus 0 times everything else. And that's the same thing as the matrix D, with columns d1, d2, all the way through dn, times the B representation of v2. Well, v2 is one of the basis vectors, so its B representation is just 0 times v1 plus 1 times v2 plus 0 times v3, with the rest 0. So what is that product equal to? It's 0 times d1 plus 1 times d2 plus 0 times everything else, which is just d2. I think you get the general idea, but I'll do it one more time to really hammer the point home. The transformation of the nth basis vector, which is also an eigenvector of our original matrix A, or of our transformation in standard coordinates, is going to be equal to what in B coordinates? Well, we wrote it right up here: it's a bunch of zeros, 0 times all of these guys, plus lambda n times vn. And it's also equal to d1, d2, all the way to dn, times the B representation of the nth basis vector, which is just 0 times v1, 0 times v2, 0 times all of them, except for 1 times vn. So that's equal to 0 times d1 plus 0 times d2 plus 0 times all of these guys, all the way to 1 times dn, which is equal to dn.

And just like that, we know what our transformation matrix is going to look like with respect to this new basis, the basis made up of n linearly independent eigenvectors of our original matrix A. So what does D look like? Its first column, which we figured out, is lambda 1 and then a bunch of zeros. Its second column is 0, lambda 2, and then a bunch of zeros. And in general, the nth column has a 0 everywhere except along the diagonal, where it's lambda n, the eigenvalue for the nth eigenvector. So down the diagonal you're going to have lambda 1, lambda 2, lambda 3, all the way down to lambda n, with just a bunch of zeros everywhere else.

So this is a neat result. If A has n linearly independent eigenvectors, and this isn't always the case, but if you can figure out the eigenvectors and say, hey, I can take a collection of n of these that are linearly independent, then those will be a basis for Rn, because n linearly independent vectors in Rn are a basis for Rn. When you use that basis, a basis of n linearly independent eigenvectors of A, we call it an eigenbasis. The transformation matrix with respect to that eigenbasis becomes a very, very nice matrix: it's super easy to multiply, super easy to invert, super easy to take the determinant of. We've seen it multiple times; it just has a ton of neat properties. It's just a good basis to be dealing with.

That's kind of the big takeaway. In all of linear algebra we did all this stuff with spaces and vectors, but in general vectors are abstract representations of real-world things. You could have a vector of stock returns, or a vector describing the weather in a certain part of the country, and you can create these spaces based on the number of dimensions. Then you have transformations. Sometimes, when you do Markov chains, your transformation is essentially the probability that, after one time increment, some state will change to some other state, and then you'll want to apply that matrix many, many times to see what the stable state is.
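That "apply the matrix many, many times" situation is exactly where the diagonal D pays off: applying the transformation k times is A to the k, and in the eigenbasis that's C times D to the k times C inverse, where raising the diagonal D to the k-th power just raises each eigenvalue to the k-th power. Here is a small sketch, once more with the made-up matrix from above (the exponent k = 10 is arbitrary):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
eigenvalues, C = np.linalg.eig(A)
C_inv = np.linalg.inv(C)

# D = C^{-1} A C is diagonal, with the eigenvalues along the diagonal.
D = C_inv @ A @ C
assert np.allclose(D, np.diag(eigenvalues))

# Applying the transformation k times the direct way: repeated matrix products.
k = 10
direct = np.linalg.matrix_power(A, k)

# In the eigenbasis it's trivial: A^k = C D^k C^{-1}, and D^k just has each
# eigenvalue raised to the k-th power on its diagonal.
via_eigenbasis = C @ np.diag(eigenvalues ** k) @ C_inv

assert np.allclose(direct, via_eigenbasis)
```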
I know I'm not explaining any of this to you well, but I wanted to tell you that all of linear algebra is really just a very general way to solve a whole universe of problems. What's useful about it is that you can have transformation matrices that define these functions, essentially, on data sets, and what we've learned now is that by looking at the eigenvectors and the eigenvalues, you can change your basis so that you can solve your problems in much simpler ways. It's all very abstract right now, but you now have the toolkit, and for the rest of your life you kind of have to figure out how to apply this toolkit to specific problems in probability or statistics or finance or modeling weather systems or who knows what else.