
# Null space 3: Relation to linear independence

Understanding how the null space of a matrix relates to the linear independence of its column vectors. Created by Sal Khan.

## Want to join the conversation?

• At , we have an nxn matrix, whereas we started with a matrix A that is mxn. Sal mentions that this has to be nxn, but I don't see why A has turned into an nxn matrix, or why it has to be nxn. •  I think Sal got it wrong here. There could also be the case where m > n. But this would require rref(A) to have all rows below the nth row be zero. In that case the row vectors would be linearly dependent, but the column vectors would still be linearly independent (their span would be a subspace of R^m) and N(A) = {0}.

Response to other answers: a square matrix is the requirement for A BASIS. Linear independence does not require a square matrix. So in an RREF matrix you can add rows of zeros and the columns remain linearly independent. In the nxn case the linearly independent columns span R^n (i.e., a basis), and in the mxn case (m > n) the linearly independent columns span a SUBSPACE of R^m (not a basis of R^m). In both cases the null space has 0 as the unique solution.
• Can someone explain to me the physical meaning of null space? I understand it mathematically but don't get it in a practical sense. Thanks. •  A 'physical meaning' would come from the meaning of the numbers in the matrix. Look up using matrices to solve Kirchhoff's Current Law for one example where the nullspace is used in an applied problem. Otherwise, if you just want some intuition: the nullspace of a matrix A is the set of vectors that are perpendicular to all the rows of A.
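That orthogonality intuition is easy to check directly. Here is a minimal sketch in pure Python (the matrix and vector are made-up illustrative numbers, not from the video): a vector in the null space has dot product zero with every row.

```python
# A hypothetical 2x3 matrix A and a vector x chosen so that A @ x = 0.
A = [
    [1, 2, 3],
    [4, 5, 6],
]
x = [1, -2, 1]  # 1*[1,4] - 2*[2,5] + 1*[3,6] = [0,0], so x is in N(A)

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

for row in A:
    print(dot(row, x))  # prints 0 for each row: x is perpendicular to every row of A
```

Each dot product comes out to 0, which is exactly the "perpendicular to all the rows" picture of the null space.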
• All that means that rref(A) = I, the identity matrix, right? • Sal explains that the only way for the matrix's column vectors to all be linearly independent is if none of them is (i.e., may be represented as) a combination of the others. In that case the only solution is 0.

Then he says that for A.x = 0 to be true, x must be the zero vector. I guess I can conclude by now that the only way to satisfy linear independence is if x = 0.

Is that wrong? Have I made any mistake in this reasoning? • Rodrigo,

You haven't made a mistake, this is correct.

Basically, Ax is the sum of each column of A times its respective term in x (the ith column of A times the ith term of x, added up over all i). If the columns of A are a linearly independent set, then the only way to multiply them all by some coefficients, add them all together, and STILL get zero is if all of the coefficients are zero. In this case, the terms of x act like the coefficients of the columns of A. For the whole thing to equal the zero vector, all of the x terms must be 0. In other words, the "nullspace of A" is ONLY the zero vector.
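The "columns times coefficients" view of Ax can be sketched in a few lines of pure Python (the matrix and x here are made-up examples, not from the video). Because the two columns below are linearly independent, only x = [0, 0] would make the result the zero vector.

```python
# A hypothetical 3x2 matrix whose columns are linearly independent.
A = [
    [1, 0],
    [0, 1],
    [2, 3],
]
x = [4, -1]

# A @ x is x[0] * (column 0) + x[1] * (column 1):
# entry i of the result is the dot product of row i with x.
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]
print(Ax)  # [4, -1, 5]  (nonzero, since x is nonzero and the columns are independent)
```

Trying any nonzero x here always leaves some nonzero entry, which is the statement N(A) = {0} made concrete.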
• What does nullspace mean, physically and mathematically? • Salman must have converted a matrix into a vector of vectors. This is confusing, since I had always thought of matrices as composed of real numbers. In computer languages such as Python there is a distinction between an "array" and a "list". A list can be a list of lists, but an array cannot be an array of arrays, at least as far as I know. Can someone comment on what Salman did here? • So what is the actual purpose of the null space? What does it signify? • The nullspace is the set of all vectors v such that, when multiplied by some matrix A in the form Av, the result is the zero vector. This is useful because it tells you every single vector that, when multiplied with A, will result in 0. That's why they call it the "null" space: the space of vectors that "nullify" or "zero out" the matrix. I've seen it used in low-level graphics work (OpenGL) and in games where 3D vector manipulation is baked in (like Second Life). Also note the nice idea that the null space is the set of all vectors that are orthogonal to the matrix's row vectors (dot product = 0).
• At , Sal starts talking about linear independence.
What if the columns of an n by n matrix weren't linearly independent?
(1 vote) • Then the zero vector could be obtained by choosing an x not equal to zero in the equation Ax = 0. For instance, if in R^3 you had a 3x3 matrix A that could be multiplied by a vector x (where x isn't [0,0,0]) such that the product was the zero vector [0,0,0], then the null space of A isn't trivial*, which implies that the columns of A (i.e., the three R^3 vectors) are linearly dependent. This means that one of the vectors could be written as a combination of the other two.

In essence, if the null space is JUST the zero vector, the columns of the matrix are linearly independent. If the null space has more than the zero vector, the columns of the matrix are linearly dependent.

* A trivial null space contains just the zero vector.
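A quick sketch of the dependent case, in pure Python with an illustrative matrix (not one from the video): every column below is a multiple of the first, so a nonzero x lands in the null space.

```python
# A hypothetical 3x3 matrix with linearly dependent columns:
# column 1 = 2 * column 0 and column 2 = 3 * column 0.
A = [
    [1, 2, 3],
    [2, 4, 6],
    [3, 6, 9],
]
x = [2, -1, 0]  # nonzero: 2*(column 0) - 1*(column 1) cancels out

Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(Ax)  # [0, 0, 0]  -> the null space contains more than just the zero vector
```

Finding a nonzero x with Ax = 0 like this is exactly the witness that the columns are linearly dependent.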