# Showing that the candidate basis does span C(A)

Showing that just the columns of A associated with the pivot columns of rref(A) do indeed span C(A). Created by Sal Khan.

## Want to join the conversation?

• Any sample exercises to try out? There are questions in the calculus section; I'm not sure if there are any here.
• Yea, not sure why the Linear Algebra section doesn't get practice problems and all other sections have problems! WE NEED SOME PRACTICE PROBLEMS SAL!
• At and , didn't Sal mean a1, a2 and a4 rather than a1, a2 and aN?
• I believe so. Notice that he switched back. His 4's do look like n's.
• I don't understand how this is a thorough proof that the pivot columns span all of the column space. In the proof, he sets the free variables to -1 and 0. Doesn't this mean the result is only valid in the special case where the free variables are -1 and 0?
• Not exactly. x3 and x5 being free variables just means that the equation x1a1 + x2a2 + x3a3 + x4a4 + x5a5 = 0 holds true for any values of x3 and x5, since the pivot variables (x1, x2, and x4) change values depending on the values of x3 and x5 so that the equation still holds. The relationship among the actual columns DOES NOT change just because you pick different values for the free variables.

To make this clearer, consider the smaller example x1a1 + x2a2 + x3a3 = 0, where x3 is a free variable and the a's are the column vectors of some matrix A. Let x1 = 2x3 and x2 = -3x3. In this case the pivot variables are x1 and x2 and the free variable is x3. We'll now consider two different values of x3 and see that the relationship among the columns remains the same.

Before doing this, we rewrite x1a1 + x2a2 + x3a3 = 0 by expressing x1 and x2 in terms of x3:
2x3a1 - 3x3a2 + x3a3 = 0

Scenario 1: x3 = -1
2(-1)a1 - 3(-1)a2 + (-1)a3 = 0
-2a1 + 3a2 - a3 = 0
a3 = -2a1 + 3a2

Scenario 2: x3 = 2
2(2)a1 - 3(2)a2 + (2)a3 = 0
4a1 - 6a2 + 2a3 = 0
4a1 - 6a2 = -2a3
-2a3 = 4a1 - 6a2
a3 = -2a1 + 3a2

As you can see, changing the value of the free variable does not change the relationship/equation between the columns. Picking -1 and 0 in the video is just a convenient way to show that the columns corresponding to the free variables CAN be written as some linear combination of the pivot columns. If a vector can be written as a linear combination of some other vectors, then that vector is redundant in the set. It doesn't matter if you later pick different values for the free variables and demonstrate that the vector can ALSO be written as a linear combination of the pivot columns and the other free columns.
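To see this concretely, here's a quick numerical check. The columns a1 and a2 below are made-up example vectors (not the ones from the video), with a3 built so that a3 = -2a1 + 3a2: for any value of the free variable x3, the combination vanishes, and the relation between the columns never changes.

```python
import numpy as np

# Hypothetical example columns (not the ones from the video):
# a1, a2 are independent, and a3 is built so that a3 = -2*a1 + 3*a2.
a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = -2 * a1 + 3 * a2

# For any value of the free variable x3, the pivot variables
# x1 = 2*x3 and x2 = -3*x3 make the combination equal zero.
for x3 in (-1.0, 2.0, 7.5):
    x1, x2 = 2 * x3, -3 * x3
    assert np.allclose(x1 * a1 + x2 * a2 + x3 * a3, np.zeros(3))

# The relation between the columns is the same regardless of x3.
assert np.allclose(a3, -2 * a1 + 3 * a2)
print("relation a3 = -2*a1 + 3*a2 holds for every choice of x3")
```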
• Thank you for everything you guys do. But alas, I am wanting more. Is there a possibility of videos on abstract algebra and even graduate-level classes? I am having a blast relearning and gaining a better grasp on the topics covered in college, but my curiosity about where else all this math can go is consuming me. Nevertheless, thank you for everything up to this point. You have done a great job, from how this website runs to the content provided. God bless you all, my friends.
• Khan Academy doesn't cover college-level math material beyond linear algebra and some differential equations.

MIT offers most of its course material for free through MIT OpenCourseWare, including some video lectures and notes.

• What is a candidate basis? I don't understand that term.
• Just think of it as a possible basis, an assumed basis, a suggested basis that might or might not be a real basis, "this candidate basis would probably be a basis, let's check if it really is".
• Sal said "column span" but he probably meant "column space". Correct me if I'm wrong.
• Column span and column space are the same thing. The column span would mean the span of the column vectors, which is exactly what the column space is.
• I am a bit confused about the following:
1. The proof that the non-pivot columns can be written as linear combinations of the pivot columns is made by rewriting the non-pivot parts of the equation that Sal writes at . That's clear, but this can also be done for the pivot columns.
2. In previous videos the proof is made that the pivot columns cannot be written as linear combinations of the other vectors by looking at the rref and trying to find a number by which you can multiply one of the vectors to get another. This can also be done for the non-pivots, because they also contain zeros.
I don't see why the above exclusively proves the linear dependence or independence of the vectors.
• 1. Consider a set of n linearly dependent vectors, of which any k taken alone are linearly independent. There are lots of ways to choose k vectors from n. But say we establish some rules, so that everyone chooses the same k vectors. These k vectors we then choose to call the linearly independent vectors for the set, and we demonstrate that the remaining n - k vectors are linear combinations of the independent k (and thus dependent). Of course, we could have selected a different subset of k vectors, and they would have been independent as well (the remaining n - k in the selection being dependent).

The point is - and this is important - that the subspace generated by any selection of k vectors from this set is the same. As far as I understand, the notion of linear dependence/independence was created solely for subspaces. So, if the subspace remains the same, then which k vectors get selected to produce it doesn't matter. So why not make everyone select the same k vectors and avoid confusion? That's where the "pivot columns are independent and non-pivot columns are dependent" rule comes in. We choose to treat pivot column vectors as independent, and thus disallow them being shown dependent. Hope this long paragraph made sense. :)

2. The proof made was that a pivot column vector cannot be written as a linear combination of other pivot column vectors, since each has a 1 in a different component of the vector, while the rest of their components are all 0. We cannot get 1 by any linear combination of 0's. For the non-pivot column vectors, this is not true.
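As a sketch of point 2, here is a small made-up matrix (a hypothetical example, not the one from the video) whose third column equals 3·(col 1) - (col 2). In rref, the pivot columns become standard basis vectors - each has a 1 where every other pivot column has a 0 - which is why no pivot column can be a combination of the others, while the non-pivot column clearly can.

```python
import sympy as sp

# Hypothetical matrix: the third column equals 3*(col 1) - (col 2).
A = sp.Matrix([[1, 0,  3],
               [0, 1, -1],
               [2, 1,  5]])

R, pivots = A.rref()
print(pivots)        # pivot column indices: (0, 1)
print(R)             # pivot columns of R are e1 and e2

# Each pivot column of R has a 1 where the others have 0s, so no
# combination of the other pivot columns can reproduce it.  The
# non-pivot column, by contrast, is 3*e1 - 1*e2.
assert list(R.col(2)) == [3, -1, 0]
```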
• By reordering the columns in the matrix and reordering the x_1, x_2, etc. to match, we can get any of the columns to be pivot or free columns, and so we can get any 3 column vectors of the matrix to be a basis, correct? If this is correct, is it always the case?
• There will always be the same number of pivot columns/variables. So once you have that many you can't have any more; in that way you can't make "any" column a pivot column. You can make n columns pivot columns, where n is the number of pivot columns.

Another piece of the puzzle is linear independence. If two columns are linearly dependent, then only one of them can be a pivot column. For instance, if one column was <1, 2, 3> and another was <2, 4, 6>, only one of them would be a pivot column, no matter how you reordered the variables.

This continues: if two columns together are linearly dependent with a third column, you could never have all three as pivot columns, just two. Of course, it is difficult to tell which columns are linearly dependent before putting the matrix into rref.

I hope that made sense, but let me know if not. The main takeaway is that yes, you have some control, but if columns wind up being dependent, that limits the control you have. You have the most control over the very first column, and you can guarantee that it will be a pivot column as long as at least one of its entries isn't 0.
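A quick way to see the "same number of pivot columns no matter the ordering" claim: reordering columns never changes the rank, and the rank equals the number of pivot columns. A small check (the matrix below is a made-up example):

```python
import numpy as np

# Hypothetical 3x4 matrix: column 2 is 2*(column 1), so those two
# columns can never both be pivot columns at once.
A = np.array([[1., 2., 0., 1.],
              [2., 4., 1., 0.],
              [3., 6., 0., 2.]])

rank = np.linalg.matrix_rank(A)  # rank = number of pivot columns

# Reordering the columns never changes the rank, so every ordering
# produces the same number of pivot columns.
for perm in ([0, 1, 2, 3], [1, 0, 3, 2], [3, 2, 1, 0]):
    assert np.linalg.matrix_rank(A[:, perm]) == rank

print("rank =", rank)
```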
• This video confuses me a little. I recall the topic of nullity some videos back, where Sal explicitly said that for a 2x5 matrix B with 3 free variables, the basis was 3 vectors in R^5, not 2 column vectors of the original matrix B represented in R^2, if I'm correct.

Can anybody help me out here, since a basis is supposed to always have the same number of vectors? Or is this only valid for the number of dimensions n of the R^n it spans?
• A basis is a linearly independent set of vectors that spans a subspace. In other words, combinations of the basis vectors can be used to get any value in the subspace.

For instance, looking at the two vectors <1,0> and <0,1>, you can get to any value in R2, so they are a basis for R2. For proof, come up with some point (x,y) in R2; to get to it with just those two vectors, use scalar multiplication: x<1,0> + y<0,1> gives a vector that goes to that point.

Now for your example (if you could link the video, hopefully I could clear up what he meant or an error in the video): you have some 2x5 matrix. This is 5 vectors of length 2. Say two of these vectors were the ones I mentioned in the last paragraph, <1,0> and <0,1>. You could use a combination of these two to get to any of the remaining three vectors, no matter what they are.

I think you are thinking of the null space. The number of vectors in a basis of the null space is equal to the number of free variables. This number is also called the nullity.
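To make the distinction concrete, here's a check with a made-up 2x5 matrix (a hypothetical example) using sympy: the null space basis has nullity = 5 - rank = 3 vectors living in R^5, while the column space basis has rank = 2 vectors living in R^2. Both are bases, but of different subspaces.

```python
import sympy as sp

# Hypothetical 2x5 matrix with rank 2 (so 5 - 2 = 3 free variables).
B = sp.Matrix([[1, 0, 2, -1, 3],
               [0, 1, 1,  4, 0]])

null_basis = B.nullspace()    # basis of the null space: vectors in R^5
col_basis = B.columnspace()   # basis of the column space: vectors in R^2

print(len(null_basis))   # nullity = 3
print(len(col_basis))    # rank = 2
assert all(v.shape == (5, 1) for v in null_basis)
assert all(v.shape == (2, 1) for v in col_basis)
```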
• What is a tensor? And why is pressure a tensor?
• A tensor is a special kind of n-dimensional array whose components transform from one coordinate system to another by applying the proper Jacobian to each index.

For someone new to tensors, that's probably gibberish, so let's break it down. A scalar value is a 0-dimensional array, a vector is a 1-dimensional array, a matrix is a 2-dimensional array, etc. All 3 of these things can be tensors, since tensors are n-dimensional arrays (where n is an integer ≥ 0).

Tensors are represented using upper and lower indices. An upper index represents a contra-variant part and a lower index represents a co-variant part. The number of indices is the order of the tensor. A vector could be represented as v_i (i is a subscript) and a matrix could be represented as A^i_j (i upper, j lower).

When swapping from one coordinate system to another, we need ways of equating their values. For example, if we wanted to switch from polar to Cartesian coordinates, we'd need the equivalences: x = r*cos(θ) and y = r*sin(θ). The Jacobians take these sorts of equivalences and put them into a single (very special) object. There's a co-variant and a contra-variant Jacobian for every change of coordinate system. In order to change the values of a tensor from one coordinate system to another, you apply the contravariant Jacobian to each contravariant index and the covariant Jacobian to each covariant index. Since not all objects can use the Jacobians to transform from one coordinate system to another, this is an important property of tensors which separates them from other n-dimensional arrays.

Tensors are very useful in physics and can describe just about any object, measurement, force, etc. Pressure is a tensor because a) it can be described using an n-dimensional array, and b) the array has co- and contra-variant parts which transform according to the Jacobians.
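As a minimal sketch of that transformation rule (the numbers below are made up for illustration): to re-express the components of a contravariant vector, i.e. an order-1 tensor with one upper index, when going from polar (r, θ) to Cartesian (x, y) coordinates, you apply the Jacobian of x = r·cos(θ), y = r·sin(θ) to that index.

```python
import numpy as np

# Jacobian of the polar -> Cartesian map x = r*cos(t), y = r*sin(t),
# evaluated at a hypothetical point (r, theta).
r, theta = 2.0, np.pi / 3
J = np.array([[np.cos(theta), -r * np.sin(theta)],   # dx/dr, dx/dt
              [np.sin(theta),  r * np.cos(theta)]])  # dy/dr, dy/dt

# Hypothetical contravariant components (v^r, v^theta) of some vector.
v_polar = np.array([1.0, 0.5])

# One Jacobian application per contravariant index (here, just one).
v_cart = J @ v_polar
print(v_cart)

# Sanity check: det(J) = r*cos^2 + r*sin^2 = r for this map.
assert np.isclose(np.linalg.det(J), r)
```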

I realize that wasn't a very thorough explanation, but there's a lot more underlying knowledge you need in order to fully understand tensors. I'd recommend this YouTube series if you're interested in learning tensor calculus: