
Rowspace and left nullspace

Rowspace and Left Nullspace. Created by Sal Khan.

Want to join the conversation?

  • anzatzi:
    It's unclear to me why dim C(A) = dim R(A). It's clear that this is the case, but I've researched this a bit and haven't found why. Thanks
    (8 votes)
    • InnocentRealist:
      When you row reduce A, all the nonzero rows left in the rref are linearly independent; otherwise one of the remaining rows would be a linear combination of the others and could be eliminated. But the number of nonzero rows in the rref is the same as the number of pivot columns, which is the same as the number of linearly independent column vectors.

      The row vectors and column vectors of A are, respectively, the column vectors and row vectors of A transpose, so, as above, the number of linearly independent rows of A transpose is the same as the number of its linearly independent columns. Therefore Rank(A transpose) = Rank(A). (A small numerical check of this appears after this discussion, just before the transcript.)
      (10 votes)
  • frank_niz:
    I am having some trouble figuring out a couple of proof-type problems in this area of intro linear algebra. For example, right now I have to prove that for any matrix A, if a vector u is in Row(A) and a vector v is in Nul(A), then the product of u transpose and v is 0. Or that if u belongs to both Row(A) and Nul(A), then u equals the zero vector.
    (4 votes)
  • TheHarlequinr:
    Is "nullspace" the same thing as the span of the "kernel"?
    (2 votes)
  • Marcello Cruz:
    In the video, Sal explains how A^T x = 0 is equivalent to x^T A = 0^T. This sounds very confusing, because the left part of the equivalence produces a 2x1 matrix and the second part produces a 1x3 matrix. The left part I just need to set equal to zero, but what should I do with the right part?
    Also, just to be sure, the transpose of a zero matrix is zero, right?
    (1 vote)
    • Yamanqui García Rosales:
      A is an n⨉m matrix, so Aᵀ is an m⨉n matrix and x⃗ is an n⨉1 column vector; therefore Aᵀx⃗ is an m⨉1 column vector. So in the equation Aᵀx⃗ = 0⃗, the 0⃗ is the m⨉1 zero column vector.

      Now, x⃗ᵀA is a 1⨉m row vector (since x⃗ᵀ is a 1⨉n row vector and A is n⨉m), and therefore the 0⃗ᵀ on the right-hand side is the 1⨉m zero row vector.
      (5 votes)
  • Cole Wyeth:
    At one point, Sal says that x1 is the pivot entry. But couldn't we have solved for any of the other x's as well, and then expressed them in the same way?
    (2 votes)
  • manu.kamin:
    For a system of equations Ax=b, how do you prove that the solution for x lies in the row space of A? I tried pondering over this, in vain.
    (2 votes)
  • SmoothHarmony:
    If Rank(A) = Rank(A transpose), then that should mean dim(C(A)) = dim(C(A transpose)). If we have, say, 2 pivots, then we have two linearly independent rows. In that case dim(C(A transpose)) = 2. The two independent rows form a basis for the column space of A transpose.
    But if we have 2 pivots, then we also have two linearly independent columns, and dim(C(A)) = 2. The two independent columns form a basis for the column space of A. So dim(C(A)) = 2 = dim(C(A transpose)). Is this reasoning correct?
    (2 votes)
  • Aiza:
    I used x2 as the pivot variable and got the basis of C(A) right, but not the null space. Can there not be different free variables in one matrix?
    (1 vote)
  • john johnsson:
    I have a question about taking the transpose of both sides of an equation, as Sal did around 15-16 minutes into the video. I can't really see how it's equivalent, because I tried a general case myself: a 2x3 matrix (a, b, c; d, e, f). I transposed it to get a 3x2 matrix, multiplied it by (x1, x2, x3), and set that equal to the 3x1 zero column vector. But when I reversed the product, because I had now transposed both sides of the equation, I got (x1, x2, x3)(a, b, c; d, e, f) = (0, 0, 0), a 1x3.

    And I'm wondering how we can say that a 2x1 matrix equals a 1x3 matrix, and what transposing both sides really means.
    (2 votes)
  • Kyle Delaney:
    Does he mean the 0 vector in R2?
    (1 vote)
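A couple of the questions above ask why Rank(A) = Rank(A transpose) and why a row-space vector dotted with a null-space vector gives 0. The NumPy sketch below is an added illustration (not part of the original discussion) that checks both claims numerically for the 2 by 3 matrix used in the video; the null-space vector used is one of the basis vectors found in the transcript.

```python
import numpy as np

# The 2x3 matrix from the video.
A = np.array([[ 2.0, -1.0, -3.0],
              [-4.0,  2.0,  6.0]])

# rank(A) = rank(A^T): both count the same set of pivots.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))   # 1 1

# Row space / null space orthogonality: A v = 0 literally says every row of A
# dotted with v is 0, so any linear combination of rows (any row-space vector
# u) also satisfies u . v = 0.
v = np.array([0.5, 1.0, 0.0])   # a null-space basis vector found in the video
u = 3 * A[0] - 2 * A[1]         # an arbitrary row-space vector
print(A @ v)                    # [0. 0.]
print(np.dot(u, v))             # 0.0
```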

Video transcript

I've got this matrix, A, here, it's a 2 by 3 matrix. And just as a bit of review, let's figure out its nullspace and its columnspace. So the nullspace of A is the set of all vectors x that are members of-- let's see, we have 3 columns here-- so members of R3, such that A times the vector is going to be equal to the 0 vector. So we can just set this up. We just need to figure out all of the x's that satisfy this in R3. So we take our matrix A: 2, minus 1, minus 3; minus 4, 2, 6. Multiply it times some arbitrary vector in R3 here. So you get x1, x2, x3. And you set them equal to the 0 vector. It's going to be the 0 vector in R2, because we have 2 rows here. You multiply a 2 by 3 matrix times a vector in R3, you're going to get a 2 by 1 vector or 2 by 1 matrix. So you're going to get the 0 vector in R2. And to solve what is essentially a system of equations-- you get 2 x1 minus x2 minus 3 x3 is equal to 0, and so on and so forth-- we can just set up an augmented matrix. So we can just set up this augmented matrix right here: 2, minus 1, minus 3; minus 4, 2, 6. And then augment it with what we're trying to set it equal to, to solve the system. And you know we're going to perform a bunch of row operations here to put this in reduced row echelon form. And they're not going to change the right-hand side of this augmented matrix. And that's essentially the argument as to why the nullspace of the reduced row echelon form of A is the same thing as the nullspace of A. But anyway, that's just a bit of review. So let's perform some row operations to solve this a little bit better. So, the first thing I might want to do is divide the first row by 2. So if I divide the first row by 2, I get 1, minus 1/2, minus 3/2, and then of course 0 divided by 2 is 0. And let's just divide this second row right here by-- I don't know, just to simplify things-- let's divide it by 4. So I'm doing two row operations in one step. And you can do that. I could have done it in two separate steps. So if we divide it by 4, this becomes minus 1, 1/2, and then you get 3/2, and then you get 0. And now, let's keep my first row the same. It's 1, minus 1/2, minus 3/2, and of course the 0 is the right-hand side. Now let's replace my second row with my second row plus my first row. So these are just linear operations on these guys. So negative 1 plus 1 is 0. 1/2 plus minus 1/2 is 0. 3/2 plus minus 3/2 is 0. And of course, 0 plus 0 is 0. So what are we left with? We're left with this right here. This is another way of saying-- I guess the easiest way to think about it is-- you're multiplying the reduced row echelon form of A now: 1, minus 1/2, minus 3/2, and a bunch of 0's here, times x1, x2, x3, and that is equal to the 0 vector in R2. This is another interpretation of this augmented matrix. So this second row is useless. It's saying 0 times that plus 0 times that plus 0 times that is equal to 0. So it's giving us no information. But this first row tells us that-- let me switch colors-- 1 times x1 minus 1/2 times x2 minus 3/2 times x3 is equal to 0. All of the vectors whose components satisfy this are in my nullspace. If I want to write it a little bit differently, I could write it as: x1 is equal to 1/2 x2 plus 3/2 x3. Or if I wanted to write my solution set in vector form, I could write that my nullspace is going to be the set of all the vectors x1, x2, x3 that satisfy these conditions. That are equal to what? Well, x2 and x3 are free variables.
They're associated with the non-pivot entries, or the non-pivot columns, in our reduced row echelon form. That is a pivot column right there. So let me write it this way. It's going to be x2 times something plus x3 times something. Those are my two free variables. And we have here, x1 is 1/2 times x2 plus 3/2 times x3. x2 is just going to be x2 times 1 plus 0 times x3. x3 is going to be 0 times x2 plus 1 times x3. So, for our nullspace, these can be any real numbers right here. They're free variables. So our nullspace is essentially all of the linear combinations of this guy and that guy. Or another way to write it: the nullspace of A is equal to the span-- which is the same thing as all of the linear combinations-- of 1/2, 1, 0. Notice these are vectors in R3. And that makes sense, because the nullspace is going to be a set of vectors in R3. So it's the span of that, and that right there: 3/2, 0, and 1. Just like that. And what is the columnspace of our original matrix, A? So the columnspace of A is equal to just the subspace created by all of the linear combinations of these guys, or essentially the span of the column vectors. It is equal to the span of 2, minus 4; minus 1, 2; minus 3, 6. These are each separate vectors. So it's the span of these 3 vectors. Now, these guys might not be linearly independent. And actually, when you put this guy in reduced row echelon form, you know that the basis vectors for this are the vectors that are associated with our pivot columns. So we have one pivot column here. It's our first column. So we could say that we could use this as a basis vector. And it makes sense, because this guy right here is minus 2 times this guy, and this guy right here is minus 3/2 times that guy. So these two guys can definitely be represented as linear combinations of that guy. So it's equal to the span of just the vector 2, minus 4. And this is the basis for our columnspace. So if you wanted to know the rank-- and this is all a bit of review-- the rank of A is equal to the number of vectors in our basis for our columnspace. So it's going to be equal to 1. Now, everything I just did is a bit of review. But with the last couple of videos, we've been dealing with transposes. So let's actually figure out the same ideas for the transpose of A. So A transpose looks like this. A transpose is equal to the matrix where 2, minus 1, minus 3 is the first column right there, and then the second column is going to be minus 4, 2 and 6. That is our transpose. So let's figure out the nullspace and the columnspace of our transpose. Let me put this in reduced row echelon form so we can get the nullspace. Let me get the nullspace of this guy. So we could do the exact same exercise. Let me write it this way. The nullspace of A transpose-- A transpose is a 3 by 2 matrix-- so it's equal to all of the vectors, x, that are members of R2. Not R3 anymore, because now we are taking the transpose's nullspace-- such that A transpose times our vector is equal to the 0 vector in R3. And we can do that the same exact way we did before. We set up an augmented matrix. We could just put it in reduced row echelon form and set them all equal to 0. So let's just do that. Let me just put it in reduced row echelon form. So let me divide my first row by 2. The first row divided by 2 is 1, minus 2.
And then the second row, let me just keep it the same: minus 1, 2. And then this last row, let me divide it by 3. So it becomes minus 1 and 2. And now, let me keep my first row the same: 1, minus 2. And now let me replace my second row with my second row plus my first row. So minus 1 plus 1 is 0. 2 plus minus 2 is 0. You get some 0's. I'm going to do the same thing with the third row. Replace it with it plus the first row. Once again you're going to get some 0's. So this is the reduced row echelon form of A transpose. And its nullspace is the same as A transpose's nullspace. To find this nullspace, we can find all of the solutions to this equation: the reduced matrix times the vector x1, x2 is equal to 0, 0, and 0. These aren't vectors. These are just entries right here: 0, 0, 0. So these last two rows give us no information, but this first one does. So we get 1 times x1-- and notice, this is the pivot column right here. It's associated with x1, so x1 is going to be a pivot variable, and x2 will be a free variable. And just to be clear, the first column is our pivot column. So if we go back to A transpose, it's this first column here that is associated with the pivot column. So when we talk about its columnspace, this by itself will span the columnspace. This is all a review of what we did before. We're just applying it to the transpose. Let's go back to our nullspace. So this tells us that 1 times x1, so x1, minus 2 times x2, is equal to 0. Or we could say that x1 is equal to 2 times x2. So all of the vectors in R2 that satisfy these conditions with these entries will be in the nullspace of A transpose. Let me write it this way. So the nullspace of A is going to be the set of all the vectors-- let me write it here-- the set of all the vectors, x1, x2, that are members of R2, clearly, such that x1, x2 is going to be equal to-- well, our free variable is x2-- so it's x2 times the vector. So x1 has to be 2 times x2. And obviously x2 is going to be 1 times x2. So what is this going to be? Well, this is all of the linear combinations of this vector right here. So we could say it's equal to the span of our vector 2, 1. Now, that's the nullspace. Sorry, this was the nullspace of A transpose. I have to be very careful there. Now what is the columnspace? The columnspace of A transpose? Well, the columnspace of A transpose is the set of all vectors spanned by the columns of A transpose. So you could just say the span of this column vector and this column vector. But we know, when we put it into reduced row echelon form, only this column vector was associated with a pivot column. So this by itself-- this guy is a linear combination of this guy. If you multiply him by minus 2, you get that guy right there. So it's consistent with everything we've learned. So it equals the span of just this guy right here: just the vector 2, minus 1, and minus 3. That's just a nice, neat exercise that we did. Notice that your span here, it's in R3, but it's just going to be a line in R3. Maybe in the next video I'll do a more graphical representation of it. But I did this whole exercise to introduce you to the ideas of the nullspace of your transpose and the columnspace of your transpose. Think about what the columnspace of your transpose is. It's the subspace spanned by this vector and that vector. And it turns out that this guy is a multiple of that guy. So we could say just by that guy. But these were the rows of our original matrix, A.
So we could also view this as the span of the row vectors of our original matrix. This is the column that is the basis for the columnspace of our transpose matrix. And of course this guy was a linear combination of that. So we could also view the columnspace of our transpose matrix as equivalent to the subspace spanned by these rows. Or we could call that the row space of A. Let me write that down. So the columnspace of A transpose-- and this is just general. Let me write this generally. It doesn't just apply to this example. The columnspace of the transpose of any matrix-- this is called the rowspace of A. And it's a very natural name. Because if A's got a bunch of rows, we could call them the transposes of some vectors. So that's the first row. You've got the second row. All the way to, maybe, the nth row. Just like that. These are vector transposes. They're really just rows. If you imagine the space that's spanned by these vectors, by the different rows, that's essentially the columnspace of the transpose. Because when you transpose it, each of these guys becomes a column. So that's what the rowspace is. Now, the nullspace of our transpose-- let's write it like this-- it was all of the vectors x that satisfied this equation, equal to the 0 vector right there. Now, what happens if we take the transpose of both sides of this equation? Well, we've learned from our transpose properties that this is equal to the reverse product of each of those transposes. So this is going to be equal to-- this is a vector-- the vector x transpose. If this was a column vector before, now it's going to become a row vector. And then, times A transpose transpose. And that's going to be equal to the transpose of the 0 vector. Or, we could just write this like this. We could write this as some matrix-- well, let me just write it like this. Some column vector x-- what's the transpose of A transpose? Well, that's just equal to A. So you take the transpose of this column vector. You now get a row vector. You could view it as a matrix if you want. If this was a member of Rn, this is now going to be a 1 by n matrix. We kind of switched the orders. And we multiply it times the transpose of the transpose. We just get the matrix A. And we set that equal to the transpose of the 0 vector. Now this is interesting. We now have it in terms of our original matrix, A. Now what did the nullspace of our matrix A look like? The nullspace was all of the vectors x that satisfied this equation being equal to 0. So the x was on the right. So the nullspace is all the x's that satisfy this. The nullspace of our transpose is all of the x's that satisfy this equation. So let me say, the set of all of the x's such that A transpose times x is equal to 0-- that is the nullspace of A transpose. Or we could also write this as the set of all of the x's such that the transpose of our x times A is equal to the transpose of the 0 vector. And we have another name for this. This is called the left nullspace of A. Why is it called the left nullspace? Because now we have x on our left. In just a regular nullspace you have x on the right. But now, if you take the nullspace of the transpose, using just our transpose properties, that's equivalent to this transposed vector right here multiplying A from the left-hand side. So all of the x's that satisfy this form the left nullspace. And it's going to be different than your nullspace.
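As a quick sanity check of the equivalence just described, here is a minimal NumPy sketch (an added illustration, not from the video) showing that the left-null-space basis vector found above satisfies both forms of the equation: A transpose times x gives a zero vector with three entries, and x transpose times A gives those same three zeros laid out as a row.

```python
import numpy as np

A = np.array([[ 2.0, -1.0, -3.0],
              [-4.0,  2.0,  6.0]])
x = np.array([2.0, 1.0])   # basis vector for the left nullspace found above

# A^T x = 0: a zero vector with one entry per column of A ...
print(A.T @ x)             # [0. 0. 0.]
# ... and x^T A = 0^T: the same three zeros, viewed as a row multiplying A
# from the left.
print(x @ A)               # [0. 0. 0.]
```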
Notice, your nullspace of A transpose was the span of this right here. This is also the left nullspace of A. Now what was just the regular nullspace of A? The regular nullspace of A was essentially a plane in R3. That's the nullspace of A. The left nullspace of A is just a line in R2. Very different things. And if you go to the rowspace, what is the rowspace of A? The rowspace of A is a line in R3. Well, what is the columnspace of A? The columnspace of A-- right here, where did I have it? Well, this is the only linearly independent vector. It was essentially a line in R2. So they're all very different things. And we'll study a little bit more how they're all related. Now there's one thing I want to relate to you. We figured out that the rank of this matrix right here is 1, because when you put it in reduced row echelon form there was one pivot column. And the basis vectors are those associated with that pivot column. And if you count your basis vectors, that's the dimension of your space. So the dimension of your columnspace is 1. And that's the same thing as your rank. Now what is the rank of A transpose? The rank of A transpose in the example-- when you put it in reduced row echelon form, you got one linearly independent column vector. So the dimension of its columnspace was also equal to 1. And in general, that's always going to be the case: the rank of A, which is the dimension of its columnspace, is equal to the rank of A transpose. And if you think about it, it makes a lot of sense. To figure out the rank of A, you essentially figure out how many pivot columns it has, or another way to say it, how many pivot entries it has. When you want to find the rank of your transpose matrix, you're essentially just saying-- and I know this is maybe getting a little bit confusing-- but when you want the rank of your transpose matrix, you're saying, how many of these columns are linearly independent? Or which of these are linearly independent? And that's the same question as asking how many of your rows up here are linearly independent. If you want to know how many columns in your transpose are linearly independent, that's equivalent to asking how many rows in your original matrix are linearly independent. And when you put this matrix in reduced row echelon form, everything in the reduced row echelon form is just the result of row operations, so its rows are just linear combinations of these rows up here. Or you could go vice versa: everything up here is just a linear combination of the rows of your matrix in reduced row echelon form. So if you only have one pivot entry-- one pivot row-- then that row by itself can represent a basis for your rowspace. Or, all of your rows can be represented as linear combinations of your pivot rows. And because of that, you just count them. You say, OK, there's one in this case. So the dimension of my rowspace is 1. And that's the same thing as the dimension of my transpose's columnspace. I know it's getting all confusing, and it's late in the day for me as well. So that, hopefully, will convince you that the rank of our transpose is the same as the rank of our original matrix.
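To tie the whole example together, here is a short SymPy sketch (an added illustration, assuming SymPy is available; it is not part of the video) that reproduces the four subspaces and the rank equality computed above for the same matrix.

```python
from sympy import Matrix

A = Matrix([[ 2, -1, -3],
            [-4,  2,  6]])

# Nullspace of A: a plane in R^3, spanned by (1/2, 1, 0) and (3/2, 0, 1).
print(A.nullspace())

# Columnspace of A: a line in R^2, spanned by (2, -4).
print(A.columnspace())

# Rowspace of A = columnspace of A^T: a line in R^3, spanned by (2, -1, -3).
print(A.T.columnspace())

# Left nullspace of A = nullspace of A^T: a line in R^2, spanned by (2, 1).
print(A.T.nullspace())

# rank(A) = rank(A^T) = 1.
print(A.rank(), A.T.rank())
```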