
Showing relation between basis cols and pivot cols

Showing that linear independence of pivot columns implies linear independence of the corresponding columns in the original equation. Created by Sal Khan.


  • Hải Đăng Trần
    I was confused about the difference between rank(A) and dim(A). Why aren't they combined into one definition?
    (4 votes)
    • Kyler Kathan
      "dim" is a function which inputs a subspace and outputs an integer.
      "rank" is a function which inputs a matrix and outputs an integer.
      "dim" returns the dimensionality of a subspace. So for example, a subspace which takes the shape of a plane would return 2.
      "rank" returns the dimensionality of the subspace generated as the span of column vectors. In general:
      Given:
      A = [a₁ a₂ ... aₙ] (lower-case a's are column vectors)
      Then:
      rank(A) = dim(span({a₁, a₂, ..., aₙ}))
      Also note, "span" inputs a set of vectors and outputs a subspace.
      (18 votes)
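The rank/dim relationship described above can be checked numerically. Here is a small sketch using SymPy on a made-up matrix (not one from the video):

```python
import sympy as sp

# Made-up 3x3 matrix whose columns span a plane in R^3
A = sp.Matrix([[1, 0, 1],
               [0, 1, 1],
               [0, 0, 0]])

basis = A.columnspace()   # a basis for span({a1, a2, a3})
print(A.rank())           # 2
print(len(basis))         # 2: rank(A) = dim(span of the columns)
```

Here `rank` is a function of the matrix, while `dim` (the length of the basis) is a function of the subspace spanned by its columns, and the two numbers agree.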
  • djohnston2014
    A*c = R*c = 0, i.e. a1c1 + a2c2 + a4c4 = r1c1 + r2c2 + r4c4 = 0. For the RHS, each ci = 0 by linear independence of the ri's. Suppose there exists a vector c' such that Ac' = 0 but not all of its entries are 0. This vector c', by definition, would be in the null space of A. However, since Rc' ≠ 0, c' would not be in the null space of R. This is a contradiction, because the two null spaces are the same. Therefore, the only solution on the left-hand side is ci = 0 for all i. I didn't understand this until I wrote out this explanation, so I'll post it here in case it's useful.
    (7 votes)
    • ramkumar venkatachalam
      Exactly... I didn't get the proof at first either, but just like you I worked it out and finally got it. For those who are still struggling, try re-watching the video from that mark.

      There are only six steps:
      1) The pivot columns in reduced row echelon form are linearly independent (because the 1 in each pivot column, e.g. "0 1 0 0", can't be made from the other columns).

      2) The solution spaces, i.e. all the solutions to the equations Rx = 0 and Ax = 0, are the same (as R is just the reduced form of A).

      3) Set the coefficients of R3 and R5 (the free columns) to zero, and find a solution to the equation Rx = 0. From step 1 we know that the remaining columns are linearly independent, so the only solution is to set the remaining coefficients to zero as well.

      4) From step 3 we see that, once the coefficients of R3 and R5 are set to 0, the solution space allows no coefficients for R1, R2, R4 other than 0.

      5) From step 2, the solution to Ax = 0 with the coefficients of A3 and A5 set to zero must be [0,0,0,0,0]^T as well, and from that we can see that c1*A1 + c2*A2 + c4*A4 = 0 has only one solution. (This is the condition required for linear independence.)

      6) The remaining part of the proof is about showing that this set of linearly independent vectors spans the column space, which is done in the next video.
      (5 votes)
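The steps above can be sketched numerically. The matrix below is made up (it is not the one from the video), but it is built so that, as in the video, columns 3 and 5 depend on pivot columns 1, 2 and 4:

```python
import sympy as sp

# Hypothetical 4x5 matrix with a3 = 2*a1 + a2 and a5 = 3*a1 - a2 + 2*a4
A = sp.Matrix([[ 1,  0, 2,  0,  3],
               [ 2,  1, 5,  0,  5],
               [-1,  2, 0,  1, -3],
               [ 1, -1, 1, -2,  0]])

R, pivots = A.rref()     # steps 1-2: R shares its solution set with A
print(pivots)            # (0, 1, 3): columns 1, 2 and 4 (0-indexed)

# Steps 3-5: with the free coefficients set to 0, only the pivot columns of the
# ORIGINAL matrix remain, and they admit only the trivial solution.
P = A[:, [0, 1, 3]]
print(P.nullspace())     # []: c1 = c2 = c4 = 0 is the only solution
```

An empty null space for the pivot-column submatrix is exactly the linear-independence condition from step 5.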
  • William Ortez
    why do I feel like a lot of this is just going around in circles?
    (4 votes)
  • JoshNicholas24
    General question: by putting the matrix into REF instead of RREF, would you still obtain the linearly independent columns?
    (4 votes)
  • Enya Hsiao
    I don't understand. When Sal is explaining the vector [c1 c2 0 c4 0], he says that c1, c2, c4 must be zero, showing linear independence. Aren't the third and fifth terms also zero? If c1, c2, c4 are zero then the solution would be [0 0 0 0 0]; doesn't that imply all the column vectors are linearly independent?
    (2 votes)
    • Gobot
      He says that the 1st, 2nd and 4th c's must be 0 for the result to be the 0 vector, and hence those columns are linearly independent.

      Whereas the 3rd and 5th just happen to be zero; since those vectors can be made from the other vectors, you would still be able to find a way to get back to 0 even if they happened not to be. (He mentions showing this in the next video.)
      (5 votes)
  • Mohamad Shebli
    If two square n×n matrices both have rank n, how can I show that their sum also has rank n (without using the characteristic polynomial)?
    (2 votes)
  • Shankar Kolluru
    Sal, why are there 0s in the 3rd and 5th rows of the vector x that is multiplied by R?
    (2 votes)
    • Bryan O
      By reducing the matrix to reduced row echelon form, we found the pivot columns. In R, the pivots are columns 1, 2, and 4. This tells us that the other two columns (3 and 5) can be expressed as linear combinations of the pivot columns. You can picture this by imagining each vector as a line. If, for example, r4 could be expressed as 2*r1, that would simply mean r4 lies on the same line as r1. The idea remains the same for a more complex combination involving two or more vectors.
      Now, if we are looking at the span of the columns (which we are), these dependent vectors can simply be omitted. This becomes obvious (I hope) when thinking of the lines (or planes, etc., in higher dimensions): if r4 lies on the same line as the other vectors, it is already included in their span.
      So, to solve this equation more simply, we set the free variables to 0. Remember, the values of the free variables can be set arbitrarily, but they then determine the values of the other variables. In this example, if we used a value other than 0, we would have to find linear combinations of the pivot columns to cancel out the values in the non-pivot columns (which can be done! try it if you don't believe me!). So 0 is picked to simplify the calculation, but it does not affect the end result.
      (2 votes)
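The point above, that the free columns are combinations of the pivot columns and that the combination can be read off R, can be sketched with SymPy. The matrix is hypothetical (not the video's), built so that its pivots are also columns 1, 2 and 4:

```python
import sympy as sp

# Hypothetical matrix built so that a3 = 2*a1 + a2 (columns 1, 2, 4 are pivots)
A = sp.Matrix([[ 1,  0, 2,  0,  3],
               [ 2,  1, 5,  0,  5],
               [-1,  2, 0,  1, -3],
               [ 1, -1, 1, -2,  0]])
R, pivots = A.rref()

# The entries of a non-pivot column of R are the combination coefficients:
print(list(R[:, 2]))         # [2, 1, 0, 0]
combo = 2*A[:, 0] + A[:, 1]  # rebuild column 3 from pivot columns 1 and 2
print(combo == A[:, 2])      # True: the free column is redundant
```

This is why setting the free variables to 0 loses nothing: the columns they multiply are already in the span of the pivot columns.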
  • a.somjp
    Can someone explain again why c1, c2 and c4 have to be zero for linear independence? This is what confuses me about linear dependence vs. linear independence.
    (2 votes)
    • loumast17
      If c1, c2 and c4 did not all equal zero but c1*R1 + c2*R2 + c4*R4 = 0 still held, that would imply one of those R vectors is a linear combination of the other two. Maybe some examples will help.

      Starting with vectors of length 2: <1,0> and <0,1> are the most basic linearly independent vectors. ANY other vector with two elements is linearly dependent on that pair. What this means is that you can multiply <1,0> and <0,1> each by some number and add them to get any other vector in R2. Let me know if that is not clear.

      Now let's call <1,0> x1 and <0,1> x2. Closer to your question, we set up c1*x1 + c2*x2 = 0. Is there any combination of c1 and c2, other than both equaling 0, that makes this linear combination equal the zero vector, in this case <0,0>? There is not. It is this thinking that carries over.

      If you have n linearly independent vectors, there is no way to make a linear combination of them equal the 0 vector other than multiplying each of them by 0. In the video the rref pivot columns were <1,0,0,0>, <0,1,0,0>, <0,0,1,0>. Could you make a linear combination of them that gives the 0 vector, in this case <0,0,0,0>, other than the trivial one? No.

      If you are having trouble understanding why these three vectors dictate the whole matrix, I can explain that too; just let me know.
      (1 vote)
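The <1,0>, <0,1> example above can be verified directly; this is a small sketch using SymPy:

```python
import sympy as sp

x1 = sp.Matrix([1, 0])
x2 = sp.Matrix([0, 1])
c1, c2 = sp.symbols('c1 c2')

# Solve c1*x1 + c2*x2 = <0,0> for c1 and c2
solution = sp.solve(list(c1*x1 + c2*x2), [c1, c2])
print(solution)   # {c1: 0, c2: 0}: only the trivial combination works
```

The solver confirms that the only way to combine two linearly independent vectors into the zero vector is with all coefficients equal to zero.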
  • Siddharth Kadu
    Is it right to set the coefficients of R3 and R5 to 0 and then prove that the other vectors in the set are linearly independent?
    (2 votes)
  • sarasethia1598
    Sal mentioned a 'particular solution'. I don't understand what exactly the significance of a particular solution is, and what the difference is between a general solution and a particular solution.
    (1 vote)

Video transcript

In the last video we saw a method of figuring out what the basis for column space is. And we use these exact examples. Whereas this was matrix A, I just took it and I put it in reduced row echelon form. And I figured out which of these columns in my reduced row echelon form of A, are pivot columns. And it turned out to be the first one, the second one, and the fourth one. And then the method is, you say look, the corresponding columns in A-- so the first one, the second one, and the fourth one-- form my basis for my column space. And since they form the basis, and if you want to know the dimension of your basis of your column space, which is also called the rank, you just say, well there's three in there. So it has a rank of one, two, three. In this video I want to discuss a little bit about why this worked. Why were we able to just take the corresponding columns? Why did linear independence of these three guys, imply linear independence of these three guys? Why was the fact that I can represent these guys-- this guy right here as a linear combination of these three, or this guy as a linear combination of these three-- why does that imply that I can construct this guy as a linear combination of my basis vectors? So the first thing that wasn't too much of a stretch of the imagination in the last video, was the idea that these pivot vectors are linearly independent. So r1, r2, and r4. And everything I'm doing, I'm kind of applying to the special case just so that it's easier to understand. But it should be generalizable. In fact, it definitely is generalizable. That all of the pivot columns in reduced row echelon form are linearly independent. And that's because the very nature of reduced row echelon form, is that you are the only pivot column that has a 1 in that respective row. So the only way to construct it is with that vector. You can't construct it with the other pivot columns because they're all going to have 0 in that row. 
And when I say it's linearly independent, I'm just saying the set of pivot columns. So let me say this in general. The set of pivot columns for any reduced row echelon form matrix is linearly independent. And it's just a very straightforward argument. Because each column is going to have a 1 in a very unique place. All of the other pivot columns are going to have a 0 in that same place. And so you can't take any linear combinations to get to that 1 because 0 times anything, minus or plus 0 times anything, can never be equal to 1. So I think you can accept that. Now, that means that the solution to c1 times r1, plus c2 times r2, plus, let me say, c4 times r4. The solution to this equation, because these guys are linearly independent, we know that this only has one solution, and that's c1, c2, and c4 is equal to 0. That's the only solution to that. So another way we could say it is, if we write r times some vector x-- well I'll just write it times this particular x-- where I write it as c1, c2, 0, c4, and 0 is equal to 0. So this will be some special member of your null space. It's a particular solution to the equation. This is equal to one, two, three, four 0's because we have four rows here. Now, if we just expand this out. If we just multiply 1 times c1, plus 0 times c2, minus 1 times 0, plus 4 times 0, you'll get-- or actually a better way to explain it-- this multiplication right here can be written as-- and we've seen this multiple times-- c1 times r1, plus c2 times r2, plus 0 times r3. So we could just ignore that term, plus c4, c4 times r4, plus 0 times r5. That's r5 right there. All of that equal to 0. So the only solution to this, because we know that these three columns are linearly independent-- or the set of just those three columns, those three pivot columns are linearly independent-- the only solution here is all of these equal to 0. That's exactly what I said right up here. 
So the only solution here, if I already constrain these two to be 0, is that these guys also all have to equal 0. Now, one thing that we've done over and over again: we know that the solution set of this equation, Rx is equal to 0, is the same as the solution set of Ax is equal to 0. Now, how do we know that? Or what do I mean? Well, the solution set of this is just the null space. The solution set is just the null space of R. It's all of the x's that satisfy this equation. And we know that is equal to the null space of A, because R is just A in reduced row echelon form. So this is the null space of A, all of the x's that satisfy this equation. Now, the only version of this that satisfied this equation was when c1, c2, and c4 are equal to 0. So that tells us that the only version of this, c1, c2, 0, c4, 0, that satisfies this equation, or this equation, is when c1, c2, and c4 are equal to 0. Or another way of saying that is: if these are vectors a1, a2, a4 right here, and you multiply this out, you get c1-- let me do it over here, let me do it in blue-- you get c1 times a1, plus c2 times a2, and then 0 times a3, plus c4 times a4 is equal to 0. Now these guys are going to be linearly independent if and only if the only solution to this equation is that they all equal 0. Well, we know that the only solution to this is that they all equal 0, because anything that's a solution to this is a solution to this. And the only solution to this was, if I go ahead and constrain these two terms to being equal to 0, that all of these c's have to be 0. So likewise, if I constrain these to be 0, the only solution to this is that c1, c2, and c4 have to be 0. So those guys have to be 0, which implies that these three vectors, a1, a2, and a4-- so that implies that the set of a1, a2, and a4 is linearly independent. So we're halfway there, and we've shown it because the pivot columns here are linearly independent.
And because the two equations have the same solution set-- the null space of the reduced row echelon form is the same as the null space of our original matrix-- we were able to show that the only solution to c1 times this, plus c2 times this, plus c4 times this is when all the constants are 0, which shows that these three vectors, or the set of those three vectors, are definitely linearly independent. Now, the next thing to prove, for them to be a basis, is to show that all of the other column vectors can be represented as linear combinations of these three guys. And, just for the sake of clarity, or maybe to not bore you too much, I'll do that in the next video. So in this one we saw that the pivot columns are linearly independent-- they always are; the set of pivot columns, by definition, is always linearly independent-- and that when you take away the non-pivot columns, the corresponding columns in your original matrix are also linearly independent. In the next one we'll show that these three guys also span your column space.
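The full argument in the transcript can be replayed in code. The matrix below is hypothetical (the video's entries are not reproduced here), but like the video's it has pivot columns 1, 2 and 4; this sketch checks both halves of the argument: the shared null space, and the independence of the corresponding columns of A:

```python
import sympy as sp

# Hypothetical 4x5 matrix with pivot columns 1, 2 and 4, as in the video
A = sp.Matrix([[ 1,  0, 2,  0,  3],
               [ 2,  1, 5,  0,  5],
               [-1,  2, 0,  1, -3],
               [ 1, -1, 1, -2,  0]])

R, pivots = A.rref()
print(A.nullspace() == R.nullspace())   # True: Rx = 0 and Ax = 0 agree

# The columns of A sitting at the pivot positions of R...
basis = sp.Matrix.hstack(*[A[:, j] for j in pivots])
print(basis.rank() == basis.cols)       # True: ...are linearly independent
print(A.rank() == len(pivots))          # True: rank(A) = number of pivot columns
```

Spanning, the second half of the basis proof, is deferred to the next video, but the rank equality above is consistent with it.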