Representing vectors in Rn using subspace members

Showing that any member of Rn can be represented as a unique sum of a vector in subspace V and a vector in the orthogonal complement of V. Created by Sal Khan.

Want to join the conversation?

  • Ahmed Ali:
    At , Sal says that 'in a previous video' he showed that having a subspace with dimension n and n linearly independent vectors that are members of the subspace implies the n vectors form a basis. Where is this video?
    (6 votes)
    • Bernard Field:
      The necessary information is likely in one of the videos on basis sets, in the Linear Algebra playlists. Whether he proved that exact result or not is another matter, but he covered enough information to infer it.

      If we take any n l.i. vectors, then we can define an n-dimensional subspace with those vectors as the basis. If those vectors are taken from a particular n-dimensional subspace, then any linear combination of those vectors must be a member of that same subspace. This means the basis defined by those vectors is a basis for the subspace those vectors were chosen from. (By definition, any basis of an n-dimensional subspace must have n vectors.)
      (0 votes)
  • Tarun Akash:
    Am I the only one who has to watch a video multiple times to actually get it?
    (5 votes)
  • InnocentRealist:
    Re @ :

    In a much earlier video Sal showed that every basis for a subspace V has the same number of elements.

    This doesn't prove that if we have some n-dimensional subspace (R^n), and n linearly independent vectors from that subspace, then they are a basis for R^n, does it (although I feel certain it's true)? I came up with the following proof (has he made this unnecessary in an earlier video?):

    Suppose there are n linearly independent R^n vectors {v_i}, i = 1, ..., n, and they aren't a basis for V. Then there is a vector u in R^n that is not in span({v_i}), which means that span(S) = span({v_i} U {u}) (the span of a set of n+1 supposedly l.i. vectors) is in R^n. But since the rref of A = [v1 v2 ... vn u] = [col1 col2 ... col(n+1)] (an n x (n+1) matrix) has at least 1 free variable, S can't be a l.i. set. Contradiction. (A condensed version of this argument is sketched just after this comment.)
    (3 votes)
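A condensed version of the argument in the comment above, written in symbols (the matrix A and the vectors are the ones named there; this is only a restatement, not a result from the video):

    A = [\,v_1 \;\; v_2 \;\; \cdots \;\; v_n \;\; u\,] \in \mathbb{R}^{n \times (n+1)} \quad\Rightarrow\quad \operatorname{rank}(A) \le n < n+1,

so the homogeneous system A c = \vec{0} has a free variable and therefore a nonzero solution, i.e.

    c_1 v_1 + \cdots + c_n v_n + c_{n+1} u = \vec{0} \quad \text{with not all } c_i = 0,

which contradicts the assumed linear independence of \{v_1, \dots, v_n, u\}.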
  • InnocentRealist:
    What's the dimension of the span of the 0 vector? If it's "one", then dim(R^n) + dim(span of the 0 vector) = n+1 (?). (Isn't the 0 vector in R^n orthogonal to every vector in R^n, and therefore the orthogonal complement of R^n in R^n?)
    (But the 0 vector of R^n is in both V and V^perp, so then V^perp isn't really a complement of V, because V and V^perp intersect; or else it's not a vector space, because it doesn't contain the 0 vector.)
    (1 vote)
    • Bernard Field:
      dzxterity covered the orthogonality quite well. I'll look at the dimensionality of span(0).

      If we treat 0 as a basis vector, and take dimension as being simply the number of basis vectors, then we would get dim(span(0))=1, which as you observed contradicts the Rank-Nullity Theorem.
      This suggests that there must be a special case for the zero vector. Possibly dimension can be defined as the number of non-zero basis vectors. Possibly 0 doesn't count as a basis vector even if it is the only vector in a set (since if we look at the 0 vector on its own, it does not satisfy the condition for linear independence, since c 0 = 0 is true for any c, not just c=0).
      Logically, dim(span(0)) = 0. This satisfies common sense and the Rank-Nullity Theorem. (See the short note below.)
      (3 votes)
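For reference, a short note making the convention precise (not from the video): the zero subspace has the empty set as its basis, so its dimension is 0, and the dimension formula from the video still checks out:

    \dim(\operatorname{span}\{\vec{0}\}) = \dim\{\vec{0}\} = 0, \qquad (\mathbb{R}^n)^{\perp} = \{\vec{0}\}, \qquad \dim(\mathbb{R}^n) + \dim((\mathbb{R}^n)^{\perp}) = n + 0 = n.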
  • camnharrington:
    Would this imply that R^n is a direct sum of V and V^perp?
    (2 votes)
  • Rich Bell:
    I believe the proof Sal made at the beginning, showing that the intersection of a subspace with its orthogonal complement is {0}, is incomplete. If you assume that x is in both V and Vperp, then you have just imposed the condition that the dimensions of V and Vperp are equal; otherwise the dot product is not defined. From that point the proof only shows that when V and Vperp have equal dimensions, the intersection must be the set {0}. To make the proof general, you should assume that x is in the intersection of V and Vperp; then you can say x dot x = 0 no matter what the dimensions of V and Vperp are.
    (1 vote)
    • Erwin:
      If I understand correctly, V (of dimension k) and Vperp (of dimension n-k) are both defined within Rn, i.e., they both consist of n-dimensional vectors, and so there is no problem with finding their scalar product.

      Imagine we have a 3-dimensional space R3, in which the V subspace is a plane and the Vperp subspace is a line orthogonal to that plane. Both the plane and the line consist of 3-dimensional vectors even though V only has a "dimension" of 2 (it's a plane) and Vperp only a "dimension" of 1 (it's a line). (A concrete version of this example is written out below.)
      (3 votes)
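A concrete instance of the plane-and-line picture in the reply above (this particular choice of V is only an illustration, not one used in the video):

    V = \operatorname{span}\{(1,0,0),\,(0,1,0)\} \subset \mathbb{R}^3, \qquad V^{\perp} = \operatorname{span}\{(0,0,1)\},

so dim V + dim V⊥ = 2 + 1 = 3, and any v in V and w in V⊥ are both 3-component vectors, so v · w is defined (and equals 0).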
  • Pranav Chaturvedi:
    At , Sal says that the combination of the basis vectors of V and V perp should be a basis for Rn. What I am confused about is that there could be a combination of V and V perp basis vectors in which the V part equals negative one times the V perp part, with coefficients that are not zero, and in that case the statement that the combined basis vectors of V and V perp form a basis for Rn would not be true.

    Am I missing anything?
    (2 votes)
  • Tomé Silva:
    I think a fact that would help people understand this proof is that nonzero vectors that are orthogonal are automatically linearly independent, meaning that you cannot write one orthogonal vector as a linear combination of the other orthogonal vectors. The pivot vectors are all orthogonal to each other.
    Which implies that the basis vectors from two orthogonal subspaces are all linearly independent from each other, and together can form a basis for Rn. (A short sketch of this fact appears below.)
    (1 vote)
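A short sketch of why mutually orthogonal nonzero vectors are linearly independent (the vectors u_1, ..., u_m are generic placeholders, not notation from the video; note that they do need to be nonzero): suppose u_i · u_j = 0 for i ≠ j and c_1 u_1 + ... + c_m u_m = 0. Dotting both sides with u_j gives

    0 = \Big(\sum_i c_i u_i\Big) \cdot u_j = c_j\,(u_j \cdot u_j) = c_j \lVert u_j \rVert^2,

and since ‖u_j‖^2 ≠ 0, every c_j must be 0.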
  • harunp:
    Could we use the following example:

    i and j are the unit basis vectors.
    Each spans a valid subspace of R2 (closure).
    They are orthogonal to each other.
    But as we know, {i, j} spans and forms a basis for R2.

    Isn't this a summary of this video?
    (1 vote)
    • Tomé Silva:
      What I think the video also wants to explore is that taking the null space is like saying: find all possible vectors in Rn that are orthogonal to the row space, which by default means going after linearly independent vectors of Rn that are not represented by a given row space. In the video, the row space plays the role of V and the null space the role of V's orthogonal complement.
      (1 vote)
  • 🏈 FOOTBALL GOD🏈:
    why is it so long
    (1 vote)

Video transcript

Let's say I have some subspace V that is a subset of Rn. And let's say that we also have its orthogonal complement; we write that as V perp. That'll also be a subset of Rn. A couple of videos ago, it might have even been the last video if I remember properly, we learned that the dimension of V, plus the dimension of the orthogonal complement of V, which is also another subspace, is going to be equal to n. Remember, dimension is just the number of linearly independent vectors you need to have a basis for V. And the dimension here is the number of linearly independent vectors you need to have a basis for the orthogonal complement of V. Now given this, let's see if we can come up with some other interesting ways in which these two subspaces relate to each other, or how they might relate to all of the vectors in Rn. So the first question is, do these two subspaces have anything in common? Are there any vectors that are in common with the two? And to test whether there are, let's just assume there is one, and see what the properties of that vector would have to be. Let's assume right here that I have some vector x that is a member of my subspace V. Let's also assume that x is a member of the orthogonal complement of V. Now what does this second statement mean? Membership in the orthogonal complement means that x dot v, for any v that is a member of our subspace, is going to be equal to 0. Let me write it this way actually: x dot v is equal to 0 for any v that is a member of our subspace. That's what it means to be a member of V's orthogonal complement. Now we assumed that x is also a member of V. So that means we can stick x in here as well, since this holds for any member of V and x is a member of V. So that implies that x dot x is equal to 0. Another way to write that is that the length of x squared is equal to 0, or the length of x is equal to 0. And that's only true for one vector. You can even try it out with the different components of x. The only vector that's true for is the 0 vector. So x has to be equal to the 0 vector. That's the only vector in Rn that, when you dot it with itself, you get 0, or the square of whose length is equal to 0. And we've shown that many, many, many videos ago. What this tells us is that the intersection between V and V's complement-- this kind of upside down U just means intersection, it just means where do these two sets overlap-- the only place that these overlap is the set containing just the 0 vector. So if I were to draw all of Rn like this. Let's say that this is Rn. And let's say I draw the subspace V. And let's say I draw the orthogonal complement to V. It's all of these vectors right here. This is the orthogonal complement to V right there. So this is V perp. These are all of the vectors that, when I dot them with any vector here, I'm going to get 0. So this is V perp. The intersection, their overlap, the only vector that is a member of both, is the 0 vector. That's their only intersection. So that's fair enough. The only vector that's a member of a subspace and its orthogonal complement is the 0 vector. Nothing too profound there. Let's see if we can come up with some other interesting relations between the subspace and its orthogonal complement, and maybe some arbitrary vectors in Rn. So let's just write down-- well, we know that the dimension of our subspace V is equal to k. If it's equal to k, we know that its dimension plus the dimension of its orthogonal complement has to be equal to n, because we're dealing in Rn.
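As a compact restatement of the step just made, in the video's own notation (nothing new here, just the argument in one line): if x is in both V and V perp, then x · v = 0 for every v in V, so taking v = x gives

    x \cdot x = \lVert x \rVert^2 = 0 \quad\Rightarrow\quad x = \vec{0},

hence V ∩ V⊥ = {0}. Together with dim(V) + dim(V⊥) = n from the earlier video, these are the two facts the rest of the argument relies on.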
And we also know that the orthogonal complement of V is a subset of Rn, I drew it right here. The dimension of V is equal to k. That's a k right there. And what's the dimension of the orthogonal complement of V going to be? Well, when you add them together-- I wrote that up here-- they have to equal n. So this guy's going to have to be n minus k. If this guy's dimension is k and this guy's dimension right here is n minus k, then when you add these two up, k plus n minus k is going to be equal to n. So this guy will have a dimension of n minus k. Now what does dimension mean? It means that that's the number of linearly independent vectors you need to form a basis. I have k vectors as a basis for V. I have v1, v2, all the way to vk. And this is a basis for V, which just means they're all linearly independent. And they span V. Any member of V right here can be represented as a linear combination of these vectors. Now the dimension of the orthogonal complement of V is n minus k. So we could have n minus k vectors. Let's call them w1, w2, all the way to wn minus k. We have n minus k of these characters. And this set is a basis for the orthogonal complement of V. So any vector in here can be represented as a linear combination of these guys right here. And all of these guys are linearly independent. So you don't have any redundant vectors there. Now let's explore. And I'll tell you where I'm trying to go. I'm trying to see, if I combine these two sets, whether I get a basis for all of Rn. That's what I'm trying to understand. Let's just say that for some constants we have c1 times v1 plus c2 times v2 plus all the way to ck times vk plus-- for the constants on these guys I'll use d-- plus d1 times w1 plus d2 times w2, all the way to plus dn minus k times the basis vector wn minus k. Let's say that I'm curious about setting this sum equal to 0, equaling the 0 vector for some scalars. The scalars are these c's and these d's. And we know that there's at least one solution set of scalars for which this is true. We could make all of these constants-- c1, c2, ck, d1, d2, all the way to dn minus k-- equal to 0. Or there might be more than one solution. In fact, if the only solution is that all of these constants have to be equal to 0, then we know that all of these vectors are linearly independent with respect to each other. And if they're all linearly independent with respect to each other, then we know that they can be a basis for Rn. But we don't know that yet. We don't know that the only solution to this is all of the constants being equal to 0. So let's see if we can experiment with this a little bit. If we take this equation, which I just wrote down, we know that one solution is all of the constants, the c's and d's, equalling 0, but we don't know that that's the only one. Let's just subtract all of the w vectors from both sides of this equation. So what are we going to get? We're going to get c1, v1 plus c2, v2 all the way to plus ck, vk. And we're going to subtract this from both sides of the equation. It's going to be equal to the 0 vector, which is really just 0, I don't even have to write it down, but maybe I'll write it down there just so you understand. I'm just taking this equation, and I'm subtracting these guys from both sides. So it's the 0 vector minus the quantity d1, w1 plus d2, w2 plus all the way to dn minus k, wn minus k. All I did is I subtracted these terms right here from both sides of this equation. I don't even have to write the 0 here, that's a bit redundant.
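Written out in one place (the symbols are the same ones Sal uses on the board; this block is just a summary of the setup): with {v_1, ..., v_k} a basis for V and {w_1, ..., w_{n-k}} a basis for V perp, the question is whether

    c_1 v_1 + \cdots + c_k v_k + d_1 w_1 + \cdots + d_{n-k} w_{n-k} = \vec{0}

forces every c_i and d_j to be 0. Subtracting the w-terms from both sides gives

    c_1 v_1 + \cdots + c_k v_k = -(d_1 w_1 + \cdots + d_{n-k} w_{n-k}).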
So what I have here is some combination of the basis vectors of V. So if I look at this, this is some linear combination of the basis vectors in V. If I call this a vector-- let me call this some vector x. Let's say x is equal to c1, v1 plus c2, v2 all the way to ck, vk. We know that it's a linear combination of our basis vectors of V, so x is a member of V. By definition, any linear combination of the basis vectors for a subspace is going to be a member of that subspace. Well, similarly, what do we have on the right-hand side of this equation? On the right-hand side of this equation, I have some linear combination of the basis vectors of V's complement. You could just put a minus all along that, but that won't change the fact that this is some linear combination of V complement's basis vectors. So this vector over here is going to be a member of-- we could also call this x. So x is equal to this, but it's also going to be equal to this, and since it can be represented as a linear combination of the orthogonal complement of V's basis vectors, or V perp's basis vectors, we know that this also has to be a member of V perp. Let me just review this, because it can be a little bit confusing. I just set up this equation right here. We know that there's at least one solution-- all of the constants equalling 0. Anyone could do this. Now I subtracted all of the yellow terms from both sides, and I got this equality. The left-hand side of this equality is a linear combination of the basis vectors of V. And any linear combination of the basis vectors of V is going to be a member of V. That's the definition of basis vectors. So if I set x equal to this left-hand side, I can say that x is a member of V. Well, if x is equal to the left-hand side, it's also equal to the right-hand side. The right-hand side is some linear combination of V perp's, or the orthogonal complement of V's, basis vectors. Which tells us that x is also a member of V perp. Well, what does that mean? That means that x must be equal to 0. I just showed you at the beginning of the video, the only vector that's a member of a subspace and its complement is the 0 vector. So we know that, because these are orthogonal complements, x must be equal to 0. So just to reiterate, we know 0 has to equal both of these sides of the equation. And these are the same constants that we had to begin with. But what do we know about these two sets? We know that the 0 vector has to be equal to this. That's the only vector in Rn that's a member both of V and of the orthogonal complement of V. Now, this is the 0 vector, and we have this linear combination of V's basis vectors being set equal to the 0 vector. What do we know about these constants? What do c1, c2, all the way to ck have to be? We know that v1 through vk is a basis for V. That tells us that they span V and that they are linearly independent. Linear independence by definition means that the only solution to this equation right here is that all of the constants have to be 0. So linear independence tells us that c1, c2, all the way through ck must be 0. All of these guys right here are 0. Which is the same as all of these guys. All of these guys must be 0. Now let's look at the right-hand side of this equation. We could put the minus all the way through, but the same argument holds. This linear combination of V perp's basis vectors is equal to 0.
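The same step condensed into symbols (again just a restatement of what was said above):

    x := c_1 v_1 + \cdots + c_k v_k \in V \qquad\text{and}\qquad x = -(d_1 w_1 + \cdots + d_{n-k} w_{n-k}) \in V^{\perp},

so x is in V ∩ V⊥ = {0}, i.e. x = 0, and linear independence of {v_1, ..., v_k} then forces c_1 = c_2 = ... = c_k = 0.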
The only solution to this being equal to 0-- because each of these w1's, w2's, and wn minus k's are linearly independent-- is that all of the constants have to be equal to 0. That falls out of linear independence. If this negative is confusing you a bit, if it makes it look different, you could just multiply this negative out and say minus d1 would have to be equal to 0, minus d2 would have to be 0, minus dn minus k would have to be 0. But it's the exact same argument. Linear independence, which falls out of the fact that this is a basis set, implies that the only solution to this being equal to 0 is each of the constants being equal to 0. Well, that means that d1, d2, all the way to dn minus k must be 0. Let's go back to what I wrote up here. This was the original equation that we were experimenting with. Just by manipulating this equation a bit, and understanding that the only intersection between V and V perp is the 0 vector, and that you only have linear independence if the only solution to these vectors equalling 0 is all of their constants equalling 0, we know that all of these terms, c1 through ck, d1 through dn minus k, have to be equal to 0. That's the only solution to this larger equation that I wrote up here. Well, if the only solution to this large equation that I wrote up here is that all of the constants are equal to 0, that implies that if I were to take the set right here of v1, v2, all the way to vk, and I were to augment that with the basis vectors of V perp, which are w1, w2, all the way to wn minus k, then this is a linearly independent set. And I know that because the only solution to this equation is each of these constants having to be equal to 0. That's what linear independence means. This implies this. Linear independence implies that. We used the fact that linear independence implies that all of these equal 0 to get the fact that c1 all the way to ck was equal to 0. And then we used it again when we set this thing also equal to the 0 vector. We knew that all of the d's had to be equal to 0. I don't know if you remember, the 0 vector came out from the fact that that was the only vector that is a member of both sets. I know I'm being a little bit repetitive, but I really want you to understand that this proof isn't some type of circular proof. We just wrote this equation, we wondered about what the solution set to it is, we rearranged it, and we said, hey, both sides of this equation are members of both V and V perp. The only vector that's a member of both is the 0 vector. So both of these sides of the equation have to be equal to 0. The only solution to that is all of these constants being equal to 0, because each of these is a linearly independent set. So therefore all of these constants have to be equal to 0. And then this augmented set, where you combine all of the basis vectors, is going to be linearly independent. Now many, many, many, many, many videos ago, we learned that if we have some subspace with dimension n, and we have n linearly independent vectors that are members of that subspace, then those n linearly independent vectors, the set of your n vectors, is a basis for the subspace. Now Rn is a subspace of itself. Rn is an n-dimensional subspace. We could write that the dimension of Rn is equal to n. Now we have n linearly independent vectors in Rn. So that tells us that these guys right here are a basis for Rn. We have n linearly independent vectors. We have n minus k that are coming from V perp.
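Summarizing this part (same notation; the count k + (n - k) = n is the key point):

    \{v_1, \dots, v_k, w_1, \dots, w_{n-k}\}

is a linearly independent set of k + (n - k) = n vectors in R^n, and n linearly independent vectors in the n-dimensional space R^n form a basis for R^n.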
We have k that are coming from V, from the basis for that subspace. So now we have a total of n vectors. They're linearly independent. They're all members of Rn. So they are a basis for Rn. Which tells us that any vector in Rn can be represented by linear combinations of these guys, which is fascinating. So this is a basis for Rn. So that tells us that we can take any vector-- let's say a is a member of Rn, some vector. That means, since this is a basis for Rn, that a can be represented as some linear combination of all of these guys. So it can be represented as c1 times v1 plus c2 times v2, all the way to plus ck times vk. Let me use a different letter for the next constants, just to make sure that you understand that this is a different equation from the one I wrote earlier in the video. So I can write this, and then I can have some other constants: plus e1 times our first V perp basis vector, plus e2 times this guy, all the way to plus e n minus k times the n minus k-th basis vector for V perp. I can represent any vector in Rn this way. Or another way to say it. What is this? This is some vector that is a member of our subspace V. And then this is some vector over here that is a member of the orthogonal complement of V. This is just a linear combination of V's basis vectors, and this is just a linear combination of V perp's basis vectors. So the fact that all of these characters are a basis for Rn tells us that any member of Rn can be represented as a linear combination of them. But that means that any member of Rn can be represented as a sum of a member of our subspace V plus a member of the subspace V perp. This is a member of V, and this is a member of V perp. And that's a really, really interesting idea. You give me a subspace, and then we can figure out its orthogonal complement. Any vector in Rn can be represented as a combination, or sum, of some vector in our subspace and some vector in its orthogonal complement. Now the next question you might be asking is, is this representation unique? So is this unique? Well, let's test it out by assuming it's not unique. So that means that for some vector a that is a member of Rn, I can represent it two ways. I can represent it as some member of my subspace V plus some member of the orthogonal complement of V. I can represent it that way. Or I could represent it as some other member of my subspace V plus some other member of my orthogonal complement. So x1, x2 are members of V perp, and v1 and v2 are members of V. If we assume it's not unique, there are two ways that I could do this, and I'm representing it as these two sums. Now clearly this side of this equation is equal to that. These are both representations of a. So we can rearrange this a little bit. We could say that v1 minus v2-- if I subtract v2 from both sides, I get v1 minus v2 is equal to-- that's subtracting v2 from both sides, and if I subtract x1 from both sides-- x2 minus x1. Now v1 and v2 are both members of the subspace V. And any subspace is closed under addition and subtraction, which is really a special case of addition. Let me write it this way: let me call this vector z, equal to both of these expressions, which are equal to each other. z is the vector v1 minus v2. Any subspace is closed under addition: if you take two vectors in a subspace and find their difference, then that resulting difference is also going to be in the subspace. So z is going to be a member of our subspace V.
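The decomposition just described, grouped into its two pieces (the braces just label the grouping Sal is doing on the board):

    a \;=\; \underbrace{(c_1 v_1 + \cdots + c_k v_k)}_{\in\, V} \;+\; \underbrace{(e_1 w_1 + \cdots + e_{n-k} w_{n-k})}_{\in\, V^{\perp}},

so every a in R^n is the sum of a member of V and a member of V perp.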
This vector right here-- x2 minus x1, which we just said is also equal to our vector z-- is going to be a member of V perp. Why? Because both x1 and x2 are members of the subspace V's orthogonal complement, and that is a subspace as well, so it is closed under addition and subtraction. So this is also going to be a member of that subspace. So we could also say that z is a member of V perp, the orthogonal complement of V. Well, we've done this multiple times; this was the first thing we showed in the video: the only vector that's a member of a subspace and its orthogonal complement is the 0 vector. So z has to be equal to the 0 vector. So this is equal to the 0 vector. Well, if both of these are equal to the 0 vector, we know that v1 minus v2 is equal to the 0 vector, which implies that v1 must be equal to v2. And we also know that x2 minus x1 is equal to the 0 vector, or x2 is equal to x1. So we tried to say that, hey, there are two ways to construct some arbitrary vector a that's in Rn, and we wrote that down. But then we found out that, no, v1 must be equal to v2 and x1 must be equal to x2. So there's only a unique way to write any member of Rn as a sum of a vector that's in our subspace V and a vector that is in the orthogonal complement of V.
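The uniqueness argument condensed into symbols (same names as in the transcript; the direct-sum remark at the end is exactly the property asked about in the comments above):

    a = v_1 + x_1 = v_2 + x_2 \;\Rightarrow\; z := v_1 - v_2 = x_2 - x_1 \in V \cap V^{\perp} = \{\vec{0}\} \;\Rightarrow\; v_1 = v_2,\; x_1 = x_2,

so every member of R^n decomposes in exactly one way, which is precisely the statement that R^n is the direct sum V ⊕ V perp.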