Representing vectors in R^n using subspace members

Video transcript

Let's say I have some subspace V that is a subset of R^n, and let's say we also have its orthogonal complement, which we write as V-perp; that's also a subset of R^n. A couple of videos ago (it might even have been the last video, if I remember properly) we learned that the dimension of V plus the dimension of the orthogonal complement of V, which is another subspace, is going to be equal to n. Remember, dimension is just the number of linearly independent vectors you need to have a basis: the dimension of V is the number you need for a basis of V, and the dimension of V-perp is the number you need for a basis of the orthogonal complement of V.

Given this, let's see if we can come up with some other interesting ways in which these two subspaces relate to each other, or how they might relate to all of the vectors in R^n. The first question is: do these two subspaces have anything in common? Are there any vectors that are members of both? To test whether there are, let's just assume there is one and see what properties that vector would have to have. Assume I have some vector x that is a member of my subspace V, and assume that x is also a member of the orthogonal complement of V.

Now, what does the second statement mean? Membership in the orthogonal complement means that x dot v is equal to 0 for any v that is a member of our subspace. But we assumed that x is also a member of V, so we can stick x in for v. That implies that x dot x is equal to 0, or, another way to write that, that the length of x squared is equal to 0, so the length of x is equal to 0. And that's only true for one vector; you can even try it out with the different components of x. The only vector that's true for is the zero vector, so x has to be the zero vector. That's the only vector in R^n that gives you 0 when you dot it with itself, or whose squared length is 0, and we showed that many, many videos ago.

So what this tells us is that the intersection of V and V-perp (this upside-down U just means intersection, where the two sets overlap) is just the set containing the zero vector. If I were to draw all of R^n, and inside it draw the subspace V, and then draw the orthogonal complement of V, all of the vectors that give you 0 when you dot them with any vector in V, then their only overlap, the only vector that is a member of both, is the zero vector.
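Compressed into symbols, the argument just made might be summarized like this (the notation is mine, not from the video):

```latex
x \in V \cap V^{\perp}
\;\Rightarrow\; x \cdot x = 0
\;\Rightarrow\; \lVert x \rVert^{2} = 0
\;\Rightarrow\; x = \vec{0},
\qquad \text{so} \qquad V \cap V^{\perp} = \{\vec{0}\}.
```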
Fair enough: the only vector that's a member of a subspace and its orthogonal complement is the zero vector. Nothing too profound there. Let's see if we can come up with some other interesting relations between the subspace, its orthogonal complement, and arbitrary vectors in R^n.

Let's say the dimension of our subspace V is equal to k. We know that its dimension plus the dimension of its orthogonal complement has to equal n, because we're dealing in R^n: V is a subset of R^n, and the orthogonal complement of V is a subset of R^n, as I drew right here. So if the dimension of V is k, what's the dimension of the orthogonal complement of V going to be? When you add them together, as I wrote up here, they have to equal n, so this guy has to have dimension n minus k: k plus n minus k is equal to n.

Now, what does dimension mean? It's the number of linearly independent vectors you need to form a basis. So let's say I have k vectors, v1, v2, all the way to vk, and this is a basis for V, which just means they're all linearly independent and they span V: any member of V can be represented as a linear combination of these vectors. The dimension of the orthogonal complement of V is n minus k, so we can have n minus k vectors, call them w1, w2, all the way to w(n−k), and this set is a basis for the orthogonal complement of V. Any vector in V-perp can be represented as a linear combination of these guys, and all of them are linearly independent, so you don't have any redundant vectors there, if you will.

Now let's explore. I'll tell you where I'm trying to go: I want to see whether, if I combine these two sets, I get a basis for all of R^n. So let's say that, for some constants, I take c1 times v1, plus c2 times v2, all the way to ck times vk, plus (I'll use d for the constants on these other guys) d1 times w1, plus d2 times w2, all the way to d(n−k) times the basis vector w(n−k), and I set this sum equal to the zero vector. The scalars are these c's and d's. We know there's at least one set of scalars for which this is true: all of these constants, c1, c2, through ck, and d1, d2, through d(n−k), could be 0. But there might be more than one solution. If the only solution is that all of these constants equal 0, then we know that all of these vectors are linearly independent with respect to each other, and if they're all linearly independent, then they could be a basis for R^n. But we don't know that yet; I'll touch on that at the end of the video.
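For reference, the equation being set up here can be written compactly as (again, my notation):

```latex
c_1 v_1 + c_2 v_2 + \cdots + c_k v_k
\;+\; d_1 w_1 + d_2 w_2 + \cdots + d_{n-k}\, w_{n-k} \;=\; \vec{0}.
```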
We don't know yet that the only solution to this is all of the constants being equal to 0, so let's experiment with it a little bit. Take this equation I just wrote down; we know one solution is all of the constants equaling 0, but we don't know that it's the only one. Let's subtract all of the w vectors from both sides of the equation. What do we get? c1 v1 plus c2 v2, all the way to ck vk, is equal to the zero vector minus (d1 w1 plus d2 w2, plus all the way to d(n−k) w(n−k)). All I did was subtract those terms from both sides, and I don't even have to write the zero vector there; it's a bit redundant.

So what I have on the left is some linear combination of the basis vectors of V. If I call this a vector, let me call it x: x is equal to c1 v1 plus c2 v2, all the way to ck vk. It's a linear combination of our basis vectors of V, so x is a member of V; by definition, any linear combination of the basis vectors of a subspace is going to be a member of that subspace.

Similarly, what do we have on the right-hand side of this equation? Some linear combination of the basis vectors of V-perp. You could distribute the minus sign all along it, but that won't change the fact that it's some linear combination of V-perp's basis vectors. So this vector over here, which equals x, can be represented as a linear combination of V-perp's basis vectors, which means x also has to be a member of V-perp.

Let me just review this, because it can be a little bit confusing. I set up this equation; we know there's at least one solution, all of the constants equaling 0; anyone could do that. Then I subtracted all of the yellow terms from both sides and got this equality. The left-hand side is a linear combination of the basis vectors of V, so if I set x equal to the left-hand side, x is a member of V. Well, if x is equal to the left-hand side, it's also equal to the right-hand side, and the right-hand side is some linear combination of V-perp's basis vectors, which tells us that x is also a member of V-perp. But what does that mean? It means x must be equal to the zero vector: I showed you at the beginning of the video that the only vector that's a member of both a subspace and its orthogonal complement is the zero vector.
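One way to condense the rearrangement and the conclusion into a single line (my notation rather than the video's):

```latex
x \;:=\; c_1 v_1 + \cdots + c_k v_k
\;=\; -\left( d_1 w_1 + \cdots + d_{n-k}\, w_{n-k} \right)
\;\Rightarrow\; x \in V \cap V^{\perp}
\;\Rightarrow\; x = \vec{0}.
```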
So, because V-perp is the orthogonal complement of V, we know x must be equal to the zero vector. Let's write that down: x equals 0. We just rearranged, so we know the zero vector has to equal both sides of this equation, with the constants we had to begin with. Let me write this a little bit neater and just put the zero vector right there; it's the only vector in R^n that's a member of both V and the orthogonal complement of V.

Now we have this linear combination of V's basis vectors being set equal to the zero vector. What do the constants c1, c2, all the way to ck, have to be? We know that v1 through vk form a basis for V; that tells us they span V and that they are linearly independent. Linear independence, by definition, means that the only solution to this equation right here is that all of the constants are 0. So linear independence tells us that c1, c2, all the way through ck, must be 0.

Now look at the right-hand side of the equation. You could distribute the minus sign all the way through, but the same argument holds: this linear combination of V-perp's basis vectors is equal to the zero vector, and because the w1's, w2's, through w(n−k)'s are linearly independent, the only solution is all of the constants being equal to 0. That falls out of linear independence. If the negative sign is confusing, if it makes it look different, you could multiply it out and say, well, those would just be other constants: minus d1 would have to be 0, minus d2 would have to be 0, minus d(n−k) would have to be 0. But it's the exact same argument: linear independence, which falls out of the fact that this is a basis set, implies that the only solution to this equaling zero is each of the constants being 0. So d1, d2, all the way to d(n−k), must be 0.

Now let's go back to the original equation we were experimenting with up here. Just by manipulating this equation a bit, and understanding that the only intersection between V and V-perp is the zero vector, and that you only have linear independence if the only solution to a combination equaling the zero vector is all of the constants equaling 0, we know that all of these terms, c1 through ck and d1 through d(n−k), have to be 0. That's the only solution to this larger equation I wrote up here. And that implies that if I take the set v1, v2, all the way to vk, and I augment it with the basis vectors of V-perp, w1, w2 (let me do that in a different color), all the way to w(n−k), then this is a linearly independent set. I know that because the only solution to this equation is each of these constants being equal to 0, and that's what linear independence means.
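Here's a minimal numerical sketch of this conclusion, using a made-up subspace of R^3; the specific vectors and the use of numpy are my own choices, not from the video:

```python
import numpy as np

# A made-up subspace V of R^3, spanned by two linearly independent vectors.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])

# In R^3 the orthogonal complement of a plane is a line; the cross product
# is orthogonal to both v1 and v2, so it serves as a basis for V-perp.
w1 = np.cross(v1, v2)

# Stack all n = 3 vectors as columns. Full rank means the only solution to
# c1*v1 + c2*v2 + d1*w1 = 0 is c1 = c2 = d1 = 0, i.e. the combined set
# {v1, v2, w1} is linearly independent.
M = np.column_stack([v1, v2, w1])
print(np.linalg.matrix_rank(M))  # prints 3
```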
We used the fact that linear independence implies all of the constants equal 0 to get that c1 through ck were 0, and we used it again when we set the other side equal to the zero vector and concluded that all of the d's had to be 0. And remember, the zero vector came out of the fact that it was the only vector that is a member of both sets. I'm being a little bit repetitive, but I really want you to understand that this proof isn't some type of circular proof. We just wrote this equation, we wondered about its solution set, we rearranged it, and we said, hey, both sides of this equation are members of both V and V-perp; the only vector that's a member of both is the zero vector, so both sides have to equal the zero vector; and the only solution to that is each of the constants being 0, because each of these is a linearly independent set. Therefore all of the constants have to be 0, and this augmented set, where you combine all of the basis vectors, is linearly independent.

Now, many, many videos ago we learned that if we have some subspace with dimension n, and we have n linearly independent vectors that are members of that subspace, then the set of those n vectors is a basis for the subspace. Well, R^n is a subspace of itself, and it's an n-dimensional subspace: the dimension of R^n is equal to n. And now we have n linearly independent vectors in R^n, with n minus k coming from the basis of V-perp and k coming from the basis of V, for a total of n vectors that are linearly independent and all members of R^n. So they are a basis for R^n, which tells us that any vector in R^n can be represented as a linear combination of these guys, which is fascinating.

So this tells us we can take any vector a that's a member of R^n, and, since this is a basis for R^n, a can be represented as some linear combination of all of these guys. It can be written as c1 times v1, plus c2 times v2, all the way to ck times vk, and then (let me use a different letter, just to make sure you understand this is a different equation than the one I wrote earlier in the video) plus e1 times our first V-perp basis vector, plus e2 times the next one, all the way to e(n−k) times the (n−k)-th basis vector for V-perp. I can represent any vector in R^n this way. And what is this first chunk right here? It's some vector that is a member of our subspace V. And this second chunk over here, let me write it as x, is some vector that is a member of the orthogonal complement of V; it's just a linear combination of V-perp's basis vectors.
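As a sketch of how you might actually compute such a decomposition, here's one approach using numpy's least-squares solver to project onto V; the example vectors are assumptions of mine, not from the video:

```python
import numpy as np

# The same made-up V: the plane in R^3 spanned by the two columns of A.
A = np.column_stack([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
a = np.array([3.0, 1.0, 2.0])  # an arbitrary vector in R^3

# Least squares finds the coefficients of the member of V closest to a,
# which is exactly the V-piece of the decomposition.
c, *_ = np.linalg.lstsq(A, a, rcond=None)
v = A @ c   # the piece of a that lives in V
x = a - v   # the leftover piece

print(np.allclose(A.T @ x, 0))  # True: x is orthogonal to V, so x is in V-perp
print(np.allclose(v + x, a))    # True: a = (member of V) + (member of V-perp)
```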
So, given that all of these characters are a basis for R^n, any member of R^n can be represented as a linear combination of them. But that means that any member of R^n can be represented as the sum of a member of our subspace V plus a member of the subspace V-perp. And that's a really, really interesting idea: you give me a subspace, we can figure out its orthogonal complement, and then any vector in R^n can be represented as the sum of some vector in our subspace and some vector in its orthogonal complement.

Now, the next question you might be asking is: is this representation unique? Let's test it out by assuming it's not unique. That means that some vector a that is a member of R^n could be represented two ways: as some member v1 of my subspace V plus some member x1 of the orthogonal complement of V, or as some other member v2 of my subspace V plus some other member x2 of my orthogonal complement. So x1 and x2 are members of V-perp, and v1 and v2 are members of V. If we assume it's not unique, there are two ways I could do this, and both are representations of a.

Clearly, then, one side of this equation is equal to the other, so we can rearrange it a little bit. If I subtract v2 from both sides, and subtract x1 from both sides, I get that v1 minus v2 is equal to x2 minus x1. Now, v1 and v2 are both members of the subspace V, and any subspace is closed under addition, and under subtraction, which is really almost a special case of addition. So let me define some vector z equal to both of these things, which are equal to each other: z is the vector v1 minus v2. If you take two vectors in a subspace and find their difference, the resulting difference is also going to be in the subspace, so z is a member of our subspace V. And this vector on the right-hand side, which is the same thing we just set equal to our vector z, is going to be a member of V-perp. Why? Because x1 and x2 are both members of the orthogonal complement of V, and that is a subspace as well, so it too is closed under addition and subtraction, and x2 minus x1 is also going to be a member of it. So we can also say that z is a member of the orthogonal complement of V.

Well, we've done this multiple times; it was the first thing we showed in the video: the only vector that is a member of both a subspace and its orthogonal complement is the zero vector. So z has to be equal to the zero vector, which means both sides of this equation are equal to the zero vector.
If both of these are equal to the zero vector, we know that v1 minus v2 is equal to the zero vector, which implies that v1 must be equal to v2, and we also know that x2 minus x1 is equal to the zero vector, or x2 is equal to x1. So we started by saying, hey, there are two ways to construct some arbitrary vector a that's in R^n, and we wrote them down, but then we found out that, no, v1 must be equal to v2 and x1 must be equal to x2. So there's only one unique way to write any member of R^n as the sum of a vector that's in our subspace V and a vector that is in the orthogonal complement of V.
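Uniqueness can also be sanity-checked numerically: two different ways of computing the V-piece of the same vector should land on the same answer. A small sketch, with the same assumed vectors as before:

```python
import numpy as np

A = np.column_stack([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])  # basis of V as columns
a = np.array([3.0, 1.0, 2.0])

# Route 1: least squares, as in the previous sketch.
v_lstsq = A @ np.linalg.lstsq(A, a, rcond=None)[0]

# Route 2: the explicit projection matrix P = A (A^T A)^{-1} A^T.
P = A @ np.linalg.inv(A.T @ A) @ A.T
v_proj = P @ a

# Both routes produce the same member of V, as uniqueness demands.
print(np.allclose(v_lstsq, v_proj))  # True
```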