
Null space and column space basis

Figuring out the null space and a basis of a column space for a matrix. Created by Sal Khan.

Want to join the conversation?

  • chroni2000:
    So basically, finding the null space is an easy way to see if the set is linearly dependent or independent?
  • vincismurf:
    Am I right to interpret that any free variables found in the rref are clues that the corresponding column vectors are redundant to the span?
  • Josh Hedgepeth:
    x3 and x4 were free; v3 and v4 were linear combinations of v1 and v2. Will this always be the case? If it's not, how do you solve for the basis? This video only solved a very simple case.
    • AdithyaC.Ganesh:
      It is because x_3 and x_4 are free that v_3 and v_4 are linear combinations of v_1 and v_2.

      Think about it this way: since x_3 and x_4 are free, we can configure them however we want. So we can just "contrive" x_3 and x_4 so that we express v_3 and v_4 in terms of v_1 and v_2!
  • kuhanmuniam:
    "They do span the column space of A but they're not the basis": what does that mean?
    • Cody Tyler:
      In order to be a basis, the vectors must all be linearly independent. As he proves later in the video, v_3 and v_4 are linear combinations of v_1 and v_2, so the vectors are not linearly independent and therefore cannot be a basis. However, linearly dependent vectors can still span: one or more of the vectors adds no new information, and the remaining vectors span the same space without them.
  • George G:
    Wait. My understanding is that a "basis" would imply an rref equal to the identity matrix, and such a matrix is a square matrix (n x n). But those two vectors [1, 2, 3] and [1, 1, 4] do not form a square matrix!
    • Wrath Of Academy:
      Four column vectors could, at most, span a 4-dimensional space. However, the 4 column vectors in A are each in R^3, so you immediately know they can't be linearly independent; at least 1 must be removed to get linear independence. It turns out, when he does the rref, that 2 of the column vectors are dependent on the others. He could have picked any 2 of them to keep and get a basis, but the rref approach just picks the first two.

      Since there are only 2 vectors in this basis, the span forms a plane. I.e., all 4 of the original vectors lie in the same plane. It's a rather slanted, awkward-to-visualize plane.
  • Lotte:
    There isn't a specific time in the video when I thought of this question; it was just overall, after watching it, that I realized this. The columns of the original matrix which correspond to the columns containing the pivot entries in the rref turn out to be the basis vectors, and the others were all redundant. Is this true in every case? (See the short sketch after this discussion.)
  • oriondierking:
    In the video, is Sal essentially combining the operations of -1R2 and R2 + 2R1 into one step? That confused me at first...
  • Ben:
    Why is it wrong to say that any two of the vectors which make up the matrix can be a basis for the space?

    Surely the vectors [1,1,4] and [1,4,1] are linearly independent in the same way as the vectors [1,2,3] and [1,1,4]?
    • Bowen:
      [1,1,4] and [1,4,1] are linearly independent and they span the column space, so they also form a valid basis for the column space. [1,2,3] and [1,1,4] are chosen in this video because they happen to be the first two columns of matrix A; the column vectors could be rearranged without doing any harm here.
  • Jiaying He:
    In the last minute of the video: does that mean C(A) = the basis? Thanks.
    • s.moerschbacher:
      C(A) represents the column space of the matrix A. That is simply: C(A) = Span(column vectors of A). You list the columns separately inside the parentheses.

      Now, in order for a set to be a basis it not only has to span the space (every possible vector in the space can be represented by a linear combination of the vectors), but it must also be linearly independent. Linear independence means there are no "extra" vectors present - the only way a linear combination of a linearly independent set can equal the zero vector is if all the coefficients are zero. Two of the column vectors of A were linear combinations of the other columns. Thus, they were extra baggage we don't need - no new information is gained by having those vectors present.

      You're missing the point by saying the column space of A is the basis. The column space of A has a basis associated with it - it's not a basis itself (the columns of A would themselves form a basis if the null space contained only the zero vector, but that's for a later video). A basis is something the space possesses. It's kinda like saying "a shirt is a collar" - shirts HAVE collars, and collars are part of what characterizes shirts, but they are certainly not the whole story.

      Example: colors. We have been taught that the primary colors are red (R), blue (B) and yellow (Y). What about the set {R, B, Y, G}? Can we represent any color we want by taking appropriate proportions (linear combinations) of these colors? Yes! Is this set a basis? No! While it spans the set of colors (we can make any color), it is not linearly independent, since green is some combination of blue and yellow - G in this set is "extra." How can we make the set linearly independent? Remove green.

      What about the set (P = purple) {B, Y, P}? Does this set constitute a basis? No! We are missing red, and therefore cannot form, e.g., orange.

      Try a few examples on your own involving colors, then try a few more complicated ones using the basis vectors of R2, (1,0) and (0,1).
  • Chris Brackamonte:
    How can just two 3D vectors span the column space of A? From my understanding, we need three 3D vectors to span all of R3. If only two 3D vectors form the basis of the column space of A, then the column space of A must be a plane in R3, and the other two vectors lie on that same plane formed by the span of the basis. Am I right?
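
Several of the questions above (free variables signalling redundant columns, and the pivot columns of the original matrix forming a basis) can be checked quickly with a computer algebra system. The following is only an illustrative sketch using SymPy; it is not part of the video:

```python
from sympy import Matrix

# The matrix from the video; its columns are v1, v2, v3, v4.
A = Matrix([[1, 1, 1, 1],
            [2, 1, 4, 3],
            [3, 4, 1, 2]])

rref_A, pivot_cols = A.rref()
print(rref_A)      # Matrix([[1, 0, 3, 2], [0, 1, -2, -1], [0, 0, 0, 0]])
print(pivot_cols)  # (0, 1) -> the first two columns are the pivot columns

# The columns of the ORIGINAL matrix that sit in pivot positions form a basis
# for the column space; the free-variable columns are the redundant ones.
for i in pivot_cols:
    print(A[:, i].T)  # Matrix([[1, 2, 3]]) then Matrix([[1, 1, 4]])
```

In general, the pivot columns of the rref tell you which columns of the original matrix to keep; the columns associated with free variables are the redundant ones.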

Video transcript

What I want to do in this video -- and it'll probably occur over several videos -- is really integrate everything we know about matrices, and null spaces, and column spaces, and linear independence. So I have this matrix here, this matrix A. And I guess a good place to start is, let's figure out its column space and its null space. The column space is actually super easy to figure out. It's just the span of the column vectors of A. So we can right from the get-go write that the column space of our matrix A -- let me do it over here -- I can write the column space of my matrix A is equal to the span of the vectors 1, 2, 3. 1, 1, 4. 1, 4, 1. And 1, 3, 2. I'm done. That was pretty straightforward, a lot easier than finding null spaces. Now this may or may not be satisfying to you. And there are a lot of open questions. Is this a basis for the space, for example? Is this a linearly independent set of vectors? How can we visualize this space? I haven't answered any of those yet. But if someone just says, hey, what's the column space of A? This is the column space of A. And then we can answer some of those other questions. If this is a linearly independent set of vectors, then these vectors would be a basis for the column space of A. We don't know that yet. We don't know whether these are linearly independent. But we can figure out if they're linearly independent by looking at the null space of A. Remember, these are linearly independent if the null space of A only contains the 0 vector. So let's figure out what the null space of A is. And remember, we can do a little shortcut here. The null space of A is equal to the null space of the reduced row echelon form of A. I showed you that when we first calculated the null space of a matrix: if you want to solve for the null space of A, you create an augmented matrix, and you put the augmented matrix in reduced row echelon form, but the 0's never change. So essentially you're just taking A and putting it in reduced row echelon form. Let's do that. So I'll keep row one the same: 1, 1, 1, 1. And then let me replace row two with row two minus row one. So what do I get? No, actually, I want to zero this out here. So row two minus 2 times row one. Actually, even better, because I eventually want to get a 1 here, let me do 2 times row one, minus row two. So let me say 2 times row one, and I'm going to subtract row two. So 2 times 1 minus 2 is 0, which is exactly what I wanted there. 2 times 1 minus 1 is 1. That's nice to have right there. 2 times 1 minus 4 is minus 2. 2 times 1 minus 3 is minus 1. All right, now let me see if I can zero out this guy here. So what can I do? I could do any combination, anything that essentially zeroes this guy out, but I want to minimize my number of negative numbers. So let me take this third row minus 3 times this first row. So I'm going to take minus 3 times that first row and add it to this third row. So 3 minus 3 times 1 is 0. These are just going to be a bunch of 3's. 4 minus 3 times 1 is 1. 1 minus 3 times 1 is minus 2. And 2 minus 3 times 1 is minus 1. Now if we want to get this into reduced row echelon form we need to target that one there and that one there. And what can we do? So let's keep my middle row the same. My middle row is not going to change: 0, 1, minus 2, minus 1. And to get rid of this one up here I can just replace my first row with my first row minus my second row, because then this won't change. I'll have 1 minus 0 is 1. 1 minus 1 is 0. That's what we wanted. 1 minus minus 2 is 3. That's 1 plus 2. 1 minus minus 1. That's 1 plus 1. That is 2. Fair enough? Now let me do my third row. Let me replace my third row with my third row minus my second row. They are obviously the same thing, so if I subtract the second row from the third row I'm just going to get a bunch of 0's. 0 minus 0 is 0. 1 minus 1 is 0. Minus 2 minus minus 2 is 0. And minus 1 minus minus 1 -- that's minus 1 plus 1 -- that's equal to 0. And just like that we have it now in reduced row echelon form. So this right here is the reduced row echelon form of A. That's straightforward.
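
If you want to double-check the row-reduction arithmetic above, here is a small sketch (using SymPy; it is an illustration, not something done in the video) that applies the same elementary row operations:

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 1],
            [2, 1, 4, 3],
            [3, 4, 1, 2]])

r1, r2, r3 = A.row(0), A.row(1), A.row(2)

r2 = 2 * r1 - r2   # row 2 -> 2*(row 1) - (row 2)
r3 = r3 - 3 * r1   # row 3 -> (row 3) - 3*(row 1)
r1 = r1 - r2       # row 1 -> (row 1) - (new row 2)
r3 = r3 - r2       # row 3 -> (row 3) - (new row 2)

print(r1)  # Matrix([[1, 0, 3, 2]])
print(r2)  # Matrix([[0, 1, -2, -1]])
print(r3)  # Matrix([[0, 0, 0, 0]])
```
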
Now the whole reason why we even went through this exercise is that we wanted to figure out the null space of A. And we already know that the null space of A is equal to the null space of the reduced row echelon form of A. So if this is the reduced row echelon form of A, let's figure out its null space. So the null space is the set of all of the vectors in R4, because we have 4 columns here -- 1, 2, 3, 4 -- the set of all of the vectors that satisfy this equation, where we're going to have three 0's right here. That's the 0 vector in R3, because we have three rows right there, and you can figure it out. This times this has to equal that 0. That dotted with that essentially is going to equal that 0. That dotted with that is equal to that 0. I say essentially because I didn't define a row vector dot a column vector. I've only defined column vectors dotted with other column vectors. But we've been over that in a previous video, where you can say this is the transpose of a column vector. So let's just take this and write a system of equations with it. So we get 1 times x1 -- so this times this is going to be equal to that 0 -- so 1 times x1, that is x1, plus 0 times x2 -- let me just write that out -- plus 3 times x3, plus 2 times x4 is equal to that 0. And then -- I'll do it in yellow right here -- I have 0 times x1, plus 1 times x2, minus 2 times x3, minus x4 is equal to 0. And then this gives me no information. 0 times all of this is equal to 0, so it just turns into 0 equals 0. So let's see if we can solve for our pivot entries, or our pivot variables. What are our pivot entries? This is a pivot entry. That's a pivot entry. That's what reduced row echelon form is all about: getting these entries that are 1 and are the only non-zero term in their respective columns, with every pivot entry to the right of the pivot entry in the row above it. And the columns that don't have pivot entries? These columns represent the free variables. So this column has no pivot entry. And so when you take the dot product, this column turned into this column in our system of equations. So we know that x3 is a free variable. x3 is free. We can set it equal to anything. Likewise x4 is a free variable. x1 and x2 are pivot variables, because their corresponding columns in our reduced row echelon form have pivot entries in them. Fair enough. So let's see if we can simplify this into a form we know. And we've seen this before. So if I solve for x1 -- this 0 I can ignore, that 0 I can ignore -- I could say that x1 is equal to minus 3x3 minus 2x4. I just subtracted these two terms from both sides of the equation. And I can say that x2 is equal to 2x3 plus x4. And if we want to write our solution set now -- so if I wanted to find the null space of A, which is the same thing as the null space of the reduced row echelon form of A -- it is equal to all of the vectors x1, x2, x3, x4 -- let me do a new color, maybe I'll do blue -- that are equal to -- so what are they going to be equal to? x1 has to be equal to minus 3x3 minus 2x4. Just to be clear, these are free variables, because I can set them to be anything, and these are pivot variables, because I can't just set them to anything. When I determine what my x3's and my x4's are, they determine what my x1's and my x2's have to be. So these are pivot variables. These are free variables. I can make this guy pi. And I can make this guy minus 2. We can set them to anything. So x1 is equal to -- let's see, let me write it this way -- they're equal to x3 -- let me do it in a different color -- do x3 like this. So it's equal to x3 times some vector plus x4 times some other vector. So any solution in my null space is going to be a linear combination of these two vectors. We can figure out what these two vectors are just from these two constraints right here. So -- let me do it in a neutral color -- x1 is equal to minus 3 times x3 minus 2 times x4. Straightforward enough. x2 is equal to 2 times x3 plus x4. What's x3 equal to? Well, x3 is equal to itself. Whatever we set x3 equal to, that's going to be x3. So x3 is going to be 1 times x3 plus 0 times x4. It is not going to have any x4 in it. x3 is going to be kind of an independent variable. It's going to be free. We can set it to whatever we want, and then that's going to be x3 in our solution set. x4 is not going to have any x3 in it. It's just going to be 1 times x4. And so our null space is essentially all of the linear combinations of these two vectors. This can be any real number. This is just any real number, and x4 is just a member of the real numbers. So all of these, the set of all of the valid solutions to Ax is equal to 0 -- where did I write that? Did I even write that down? No, I haven't even written that anywhere. The set of all x where Ax is equal to 0, where this is my x, equals all the linear combinations of this vector and that vector right there. And we know what all of the linear combinations mean. It means my null space is equal to the span of these two guys: the span of minus 3, 2, 1, 0, and minus 2, 1, 0, 1.
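
As a sanity check on the null space just found, SymPy's nullspace method returns the same two spanning vectors (again, an illustrative sketch rather than part of the video):

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 1],
            [2, 1, 4, 3],
            [3, 4, 1, 2]])

for n in A.nullspace():
    print(n.T)    # Matrix([[-3, 2, 1, 0]]) then Matrix([[-2, 1, 0, 1]])
    print(A * n)  # Matrix([[0], [0], [0]]) -- each spanning vector really solves Ax = 0
```
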
Now let me ask you a question. Are the columns in A a linearly independent set? So if we write these vectors right there, these are the column vectors of A. So let me write that down. So the column vectors of A -- what were they? Let's see. 1, 3, 2. No, it's 1, 2, 3. 1, 1, 4. 1, 4, 1. And 1, 3, 2. So these are just the column vectors of A. I could just write A as this set of columns, but my question is, is this a linearly independent set? And here you might immediately start thinking, well, when we said that something is linearly independent -- linear independence implies that there's only one solution, we saw this I think two videos ago, that there's only one solution to Ax is equal to 0. And that is the 0 solution, that x is equal to the 0 vector. Or another way to say that is that the null space of my matrix A is equal to just the 0 vector. That's what linear independence implies. And it goes both ways. If my null space is just the 0 vector, then I know the columns are linearly independent. If my null space includes other vectors, then they are not linearly independent. Now my null space of A, what does it include? Is it just the 0 vector? Well, no, it includes every linear combination of these guys. It actually includes an infinite number of vectors, not just one solution. Obviously the 0 vector is contained here -- if you just pick 0 for that and that, it's contained -- but you can get a whole set of other vectors. So because the null space of A does not just contain the 0 vector -- it has more than just 0 -- what does that mean? Well, that means that there's more than one solution to this. And that means that this is a linearly dependent set. And what does that mean? At the very beginning of the video I said, what's the column space of A? And we said, the column space of A is just the span of the column vectors. I just wrote it out like that. And I said, well, it's not clear whether this is a valid basis for the column space of A. And what's a basis? A basis is a set of vectors that span a subspace and are also linearly independent. And we just showed that these guys are not linearly independent. So that means that they are not a basis for the column space of A. They do span the column space of A, by definition really, but they're not a basis. They need to be linearly independent for them to be a basis.
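
The link between the null space and linear independence can also be seen in code. In this sketch (SymPy, for illustration only), the full 3x4 matrix has a nontrivial null space, while the matrix built from just its first two columns does not:

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 1],
            [2, 1, 4, 3],
            [3, 4, 1, 2]])
B = A[:, 0:2]  # keep only the first two columns, (1, 2, 3) and (1, 1, 4)

# A nontrivial null space means there is a nonzero solution to Ax = 0,
# i.e. a nonzero combination of the columns that adds up to the zero vector.
print(len(A.nullspace()))  # 2 -> the four columns of A are linearly dependent
print(len(B.nullspace()))  # 0 -> only the zero solution: these two columns are independent
```
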
So let's see if we can figure out what a basis for this column space would be. And to do that we just have to get rid of some redundant vectors. If I can show you that this guy can be represented by some combination of these two guys, then I can get rid of that guy. He's not adding any new information. Same with that guy. Who knows? So let's see if we can figure this piece of the puzzle out. So we know already that x1 -- let me write it this way, that x1 times -- maybe I'll just kind of leave you hanging and continue this in the next video. But we know that x1 times 1, 2, 3, plus x2 times 1, 1, 4, plus x3 times 1, 4, 1, plus x4 times 1, 3, 2 -- we know that this is equal to 0. Now, if we are able to solve for x4 in terms of -- let me see if I can solve for the vectors that are associated with my free variables using the other vectors. And you'll see it's actually pretty straightforward. So let's say I want to solve for x4. So if I subtract this from both sides of this equation, I get what? Let me put it this way, let me set x3 equal to 0. It was a free variable; I can do that. So if I set x3 equal to 0, then what do I get here? If I set x3 equal to 0, this guy disappears. And if I subtract this from both sides of this equation, I get x1 times 1, 2, 3, plus x2 times 1, 1, 4, is equal to -- I'm just setting x3 equal to 0, that was a free variable, so this whole thing disappears -- so that is equal to minus x4 times 1, 3, 2. Now I set x3 equal to 0. Let me set x4 to be equal to minus 1. If x4 is equal to minus 1, what is minus x4? Well, then this thing will just be equal to 1. And I'll have x1 times 1, 2, 3, plus x2 times 1, 1, 4, will equal this fourth vector right here. And can I always find things like this? Well, sure, I can actually find the particular values. If x3 is equal to 0 and x4 is minus 1 -- let me copy and paste what I have up here -- let me scroll down a little bit. This is what we got when we figured out our null space, right there. So if I'm setting -- remember, these are the free variables -- if I set x3 equal to 0 and x4 equal to minus 1, what is x1? This implies that x1 is equal to minus 3 times x3, and that's just 0, minus 2 times x4. If x4 is minus 1, minus 2 times minus 1, x1 will equal 2. And then what will x2 be equal to? x2 is equal to 2 times x3, which is 0, plus x4, so it's equal to minus 1. So I just showed you that if I set this equal to 2 and this equal to minus 1, I have a linear combination of this vector and this vector that adds up to this fourth vector. And you can even verify it. 2 times 1 minus 1 is equal to 1. 2 times 2 minus 1 is equal to 3. 2 times 3 is 6, minus 4 is equal to 2. So it checks out. So I just showed you, using really our definitions and looking at which were our free variables versus our pivot variables, that we were able to, kind of just very simply, solve for this fourth vector in terms of these first two. So we know, if we go back to the set, that this fourth vector is really unnecessary -- really not adding anything to the span of the set of vectors -- because this guy can be written as a combination of this guy and this guy.
Now let's see if we can do the same exercise with this guy, this third guy. He is also dictated by a free variable. So let's see if I can write him as a combination of these first two. Well, we'll do the exact same thing. Instead of setting x3 equal to 0 and x4 equal to minus 1, let us set x4 equal to 0, because I want to cross that out, and let me set x3 equal to minus 1. If x3 is equal to minus 1, what does this equation reduce to? We get x1 times 1, 2, 3, plus x2 times 1, 1, 4, is equal to -- this is minus 1 times 1, 4, 1, and when we add it to both sides of this equation, we get plus 1 times 1, 4, 1. And once again we can just solve for our x1 and x2. If x4 is 0 and x3 is minus 1, then the x4 term is 0, so x1 is just minus 3 times x3, so x1 would be equal to 3, right? Minus 3 times minus 1. And what would x2 be equal to? x4 is 0, so we can ignore that. x2 would be 2 times x3, which is equal to minus 2. So this would be 3, and then this would be minus 2. Let's see if it works out. 3 times 1 minus 2 is 1. 3 times 2 minus 2 is 4. 3 times 3 is 9, minus 8 is 1. It checks out. So I'm able to write this vector, the one that was associated with the free variable, as a linear combination of these two. So we can get rid of him from our set. So now I've shown that this guy can be written as a linear combination of these two, and this guy can be written as a linear combination of these two. So the span of all of those guys should be equal to the span of -- so let me write it this way. The column space of A I can now rewrite. Before, it was the span of all of those vectors -- the span of all of the column vectors, v1, v2, v3, and v4. Now I just showed you that v3 and v4 can be rewritten in terms of v1 and v2, so they're redundant. So that is equal to the span of v1 and v2, which are just those two vectors: vector 1, 2, 3, and vector 1, 1, 4.
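
Both of the combinations found here, v4 = 2*v1 - 1*v2 and v3 = 3*v1 - 2*v2, can be verified with plain arithmetic; here is a short, illustrative sketch:

```python
v1 = [1, 2, 3]
v2 = [1, 1, 4]
v3 = [1, 4, 1]
v4 = [1, 3, 2]

# Setting x3 = 0, x4 = -1 gave x1 = 2, x2 = -1, i.e. v4 = 2*v1 - 1*v2
print([2 * a - 1 * b for a, b in zip(v1, v2)] == v4)  # True
# Setting x4 = 0, x3 = -1 gave x1 = 3, x2 = -2, i.e. v3 = 3*v1 - 2*v2
print([3 * a - 2 * b for a, b in zip(v1, v2)] == v3)  # True
```
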
Now are any of these guys redundant? Can I express one of them as a linear combination of the other? Essentially, when I'm talking about a linear combination of only one other vector, it's just multiplying it by a scalar. Well, let's think about that. There are multiple ways you can show this, but the easiest way is: well, look, to go from this entry to that entry I'm just multiplying by 1. But if I multiply this whole vector by 1, then I'm going to get a 2 here and I'm going to get a 3 here. So it won't work. If I want to represent this guy as a scalar multiple of that guy -- so any scalar multiple of 1, 2, 3 is going to be equal to 1c, 2c, 3c, right? And so we're saying this guy has to be represented somehow like that, if we say that this guy is somehow a scalar multiple of that guy. So that would have to be equal to 1, 1, 4. When you look at this top entry it implies that c would have to be equal to 1. But when you look at this second entry, you'd think that c would have to be equal to 1/2. So you get a contradiction. And over here c would have to be equal to 4/3. So there's no c for which this will work -- no scalar multiple works. And you can work that both ways. So there's no way that you can represent one of these guys as a linear combination of the other. And you can actually prove in other ways, maybe more formally, that this is linearly independent. But given that this is linearly independent -- I think you're satisfied with that -- we can then say that the set of vectors 1, 2, 3, and 1, 1, 4 is a basis for the column space of A. Now I'm going to let you go in this video because I think I've gone well over time. But what I'm going to do in the next few videos, now that I've established that this is a basis for the column space of A, is attempt to visualize it. Because we can say that the column space of A is equal to the span of these two vectors, and we can think about what the span of those two vectors is. We're going to see that it's a plane in R3: the span of 1, 2, 3 and 1, 1, 4. And this is a quick reminder, as I've said a couple of times: when I say it's a basis, all I'm saying is that these guys both span the column space of A. When I had four vectors, they also spanned the column space of A. But what makes these two a basis is that they are linearly independent. There's no extra information, no redundant vectors that can be represented by other vectors within the basis. They are linearly independent. Anyway, I'll let you go for now.
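
Finally, the basis arrived at in the video can be cross-checked against SymPy's columnspace method; this sketch is for illustration only, since the video does everything by hand:

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 1],
            [2, 1, 4, 3],
            [3, 4, 1, 2]])

for b in A.columnspace():  # a basis for C(A), read off the pivot columns of A
    print(b.T)             # Matrix([[1, 2, 3]]) then Matrix([[1, 1, 4]])

print(A.rank())            # 2 -> the column space is a 2-dimensional plane in R^3
```
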