
Introduction to linear independence

Introduction to linear dependence and independence. Created by Sal Khan.


  • Michael:
    Khan says that three coplanar vectors would be "linearly dependent." Wouldn't they be "planarly dependent," not "linearly dependent"? Doesn't "linear," by definition, mean "line," and "planar," "plane"?
    (45 votes)
    • mahalnako:
      The term is always "linearly" independent or dependent, regardless of how many dimensions are involved. I'm not a mathematician, but I'm taking Linear Algebra in college, and we use the same terminology. The terms come from the concept of a linear combination, which is a sum of vectors in a vector space that have been scaled by multiplication. Sal defines a linear combination in the previous video and says that the reason for the word "linear" is the focus on this scaling that takes place, that is, on the use of the scalar. I won't claim to completely understand; to me it is just semantics. After doing enough of this, you're not really thinking of the word "linear" when you say "linearly independent" anyway. You're focused on whether or not the linear combination spans the vector space.

      It's an interesting question though.
      (74 votes)
  • William Barksdale:
    Wait, so shouldn't the example with 3 vectors in R2 be linearly independent? Because they provide the directions you need to span R2?
    (14 votes)
    • kmleffler:
      Good question; he partially addresses it in the video. The third vector is unneeded as a basis for R2. Any set of two of those vectors, by the way, IS linearly independent. Putting a third vector into a set that already spans R2 causes that set to be linearly dependent.
      (22 votes)
  • jdsutton:
    Since you can span all of R^2 with only 2 vectors, does that mean that any set of 3 or more two-dimensional vectors will be linearly dependent?
    (14 votes)
    • Noble Mushtak:
      Yes! Since you can span all of R^2 with only 2 vectors, any set of 3 or more vectors in R^2 will be linearly dependent. This is because, as you'll learn later, every basis of a given subspace has the same number of vectors (this number is called the dimension of the subspace), so any set of vectors from that subspace with more vectors than the subspace's dimension will be linearly dependent. (That might not make sense now, but by the time you finish this playlist, it probably will.) Good job on figuring that out!
      (18 votes)
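jdsutton's observation can be checked numerically: stack the vectors as columns and compute the matrix rank, which can never exceed the number of rows. A minimal sketch (numpy is my choice of tool, not part of the lesson) using the three vectors Sal works with later in the video:

```python
import numpy as np

# Columns are the three vectors from the video: (2,3), (7,2), (9,5).
M = np.array([[2, 7, 9],
              [3, 2, 5]])

rank = np.linalg.matrix_rank(M)
print(rank)               # 2: the rank of a 2x3 matrix is at most 2
print(rank < M.shape[1])  # True -> the set must be linearly dependent
```

Any third column would give the same verdict, since a 2-row matrix can never have 3 pivots.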
  • Andrew:
    This may seem a no brainer, but what -is- a dimension, in the mathematical sense? It's one of those concepts that I understand (I think) in my head but cannot explicitly put into words. I can give examples of things in various dimensions, but I cannot yet explain what a dimension really is.
    (14 votes)
    • Derek M.:
      If B is a basis for a vector space V, then the dimension of V is the number of vectors in the basis B.
      If you don't know what bases are yet, an intuitive way to identify the dimension of a vector space is to count the number of entries in its vectors. For example, R^4 is 4-dimensional because its vectors have 4 entries, and the vector space of all 5x6 matrices is 30-dimensional because there are 30 entries in a 5x6 matrix.
      (9 votes)
  • Konni Sunny:
    Sal says that span(v1, v2) = R^2, but v3 is linearly dependent. So I'm assuming the set {v1, v2, v3} is linearly dependent but the set {v1, v2} is linearly independent? Just clarifying. Does this mean only 2 linearly independent vectors can span a vector space?
    (6 votes)
  • marechal:
    Is it correct to say that for vectors to be linearly independent they must lie in different dimensions? Is it the only necessary condition?
    (6 votes)
  • João Sombrio:
    In the case of 3 dimensions, how do I express (calculate) a span that forms a surface? That is, if the span is a surface, how do I find and express it?
    (5 votes)
    • Joshua:
      To express a plane, you would use a basis (the minimum set of vectors required to fill the subspace) of two vectors. The two vectors would be linearly independent, so the plane would be span(V1, V2). To express all of 3-dimensional space, you would need a basis of 3 linearly independent vectors, span(V1, V2, V3).
      (6 votes)
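Joshua's point that a plane is span(V1, V2) can be made concrete: every point of the plane is c1·V1 + c2·V2 for some scalars c1, c2. A small sketch with numpy and two made-up (hypothetical, not-from-the-video) basis vectors:

```python
import numpy as np

# Hypothetical basis vectors for a plane through the origin in R^3.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 3.0])

def point_on_plane(c1, c2):
    # Every point of span(v1, v2) is a linear combination c1*v1 + c2*v2.
    return c1 * v1 + c2 * v2

p = point_on_plane(2.0, -1.0)
print(p)  # [ 2. -1.  1.]
```

Sweeping c1 and c2 over all real values traces out the whole plane, which is exactly what "span" means here.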
  • Donald Vespia:
    Are the subjects covered by the videos on linear combinations, spans and linear dependence and independence pure math theory? Do they have application to the physical sciences, engineering or computer science?
    (4 votes)
    • Derek M.:
      Yes, they have applications. For example, linear independence comes up when doing electric circuit analysis. My friend told me that in chemistry, they talk about linear combinations.
      Also, linear algebra as a whole is very useful for computer science, especially in graphics work.
      (3 votes)
  • Soojin Cho:
    What happens when linear independence is different between rows and columns?

    If I have three "column" vectors in R^4,
    1 1 0
    0 0 0
    1 0 1
    0 1 1

    I thought because there is a zero row vector, this collection would be linearly dependent.

    Why is it linearly independent...?


    Thank you!
    (2 votes)
    • Derek M.:
      It is not true that a zero row means the collection is linearly dependent. Here is a counterexample:
      Consider
      1 0
      0 0
      0 1
      These column vectors are linearly independent, because one is the unit vector i and the other is k, and we know these vectors to be linearly independent (also notice the zero row).
      A zero row guaranteeing linear dependence is only true for matrices that are square, or that are m by n with m < n (i.e., more columns than rows).
      Really, the simplest way to check whether a set of vectors is linearly independent is to put the vectors into a matrix as columns and row reduce the matrix to echelon form; the vectors are linearly independent if and only if there is a pivot in every column.
      (3 votes)
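Derek's recipe (vectors as columns, reduce to echelon form, check for a pivot in every column) amounts to checking that the rank equals the number of columns. A quick sketch with numpy (my tooling choice, not Derek's), applied to Soojin's three vectors:

```python
import numpy as np

# Columns are Soojin's three vectors in R^4 (note the zero row).
M = np.array([[1, 1, 0],
              [0, 0, 0],
              [1, 0, 1],
              [0, 1, 1]])

rank = np.linalg.matrix_rank(M)
print(rank)                # 3
print(rank == M.shape[1])  # True -> a pivot in every column, so independent
```

The zero row does no harm here: the three columns still point in genuinely different directions within R^4.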
  • Sarthak Bhadviya:
    Sal said that any point in R2 can be represented as a linear combination of v1 and v2, and so span(v1, v2) is R2. But I can't represent every point in R2 as a linear combination of v1 and v2. For example, take the vector (2, 4). Here c1v1 + c2v2 cannot represent (2, 4), so clearly span(v1, v2) is not R2. Please help me understand.
    (1 vote)
    • Jerry Nilsson:
      𝑐₁𝒗₁ + 𝑐₂𝒗₂ = (2, 4)

      With 𝒗₁ = (2, 3) and 𝒗₂ = (7, 2) we get
      𝑐₁⋅(2, 3) + 𝑐₂⋅(7, 2) = (2, 4)

      This gives us the system of equations
      R₁: 2𝑐₁ + 7𝑐₂ = 2
      R₂: 3𝑐₁ + 2𝑐₂ = 4

      Replacing R₂ with 2⋅R₂ − 3⋅R₁, we get
      2𝑐₁ + 7𝑐₂ = 2
      −17𝑐₂ = 2

      From the second equation we get 𝑐₂ = −2∕17

      Plugging that into the first equation, we get
      2𝑐₁ − 14∕17 = 2
      ⇒ 𝑐₁ = (2 + 14∕17)∕2 = 24∕17

      We check our work:
      24∕17⋅(2, 3) − 2∕17⋅(7, 2)
      = 1∕17⋅(48, 72) − 1∕17⋅(14, 4)
      = 1∕17⋅(34, 68)
      = (2, 4)
      (5 votes)
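Jerry's hand computation can be double-checked by solving the same 2x2 system numerically (numpy is my choice of tool here, not part of the original answer):

```python
import numpy as np

# Columns of A are v1 = (2, 3) and v2 = (7, 2); solve c1*v1 + c2*v2 = (2, 4).
A = np.array([[2.0, 7.0],
              [3.0, 2.0]])
b = np.array([2.0, 4.0])

c = np.linalg.solve(A, b)
print(c)                      # approximately [ 1.4118 -0.1176 ], i.e. [24/17, -2/17]
print(np.allclose(A @ c, b))  # True -> the combination really lands on (2, 4)
```

The solver agrees with the elimination above: c1 = 24/17 and c2 = -2/17.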

Video transcript

Let's say I had the set of vectors-- I don't want to do it that thick. Let's say one of the vectors is the vector 2, 3, and then the other vector is the vector 4, 6. And I just want to answer the question: what is the span of these vectors? And let's assume that these are position vectors. What are all of the vectors that these two vectors can represent? Well, if you just look at it, and remember, the span is just all of the vectors that can be represented by linear combinations of these. So it's the set of all the vectors that if I have some constant times 2 times that vector plus some other constant times this vector, it's all the possibilities that I can represent when I just put a bunch of different real numbers for c1 and c2. Now, the first thing you might realize is that, look, this vector 2, this is just the same thing as 2 times this vector. So I could just rewrite it like this. I could just rewrite it as c1 times the vector 2, 3 plus c2 times the vector-- and here, instead of writing the vector 4, 6, I'm going to write 2 times the vector 2, 3, because this vector is just a multiple of that vector. So I could write c2 times 2 times 2, 3. I think you see that this is equivalent to the 4, 6. 2 times 2 is 4. 2 times 3 is 6. Well, then we can simplify this a little bit. We can rewrite this as just c1 plus 2c2, all of that, times 2, 3, times our vector 2, 3. And this is just some arbitrary constant. It's some arbitrary constant plus 2 times some other arbitrary constant. So we can just call this c3 times my vector 2, 3. So in this situation, even though we started with two vectors, and I said, well, you know, the span of these two vectors is equal to all of the vectors that can be constructed with some linear combination of these, any linear combination of these, if I just use this substitution right here, can be reduced to just a scalar multiple of my first vector. And I could have gone the other way around. 
I could have substituted this vector as being 1/2 times this, and just made any combination of scalar multiple of the second vector. But the fact is, that instead of talking about linear combinations of two vectors, I can reduce this to just a scalar combination of one vector. And we've seen in R2 a scalar combination of one vector, especially if they're position vectors. For example, this vector 2, 3. It's 2, 3. It looks like this. All the scalar combinations of that vector are just going to lie along this line. So 2, 3, it's going to be right there. They're all just going to lie along that line right there, so along this line going in both directions forever. And if I take negative values of 2, 3, I'm going to go down here. If I take positive values, I'm going to go here. If I get really large positive values, it's going to go up here. But I can just represent the vectors, and when you put them in standard form, their arrows essentially would trace out this line. So you could say that the span of my set of vectors-- let me put it over here. The span of the set of vectors 2, 3 and 4, 6 is just this line here. Even though we have two vectors, they're essentially collinear. They're multiples of each other. I mean, if this is 2, 3, 4, 6 is just this right here. It's just that longer one right there. They're collinear. These two things are collinear. Now, in this case, when we have two collinear vectors in R2, essentially their span just reduces to that line. You can't represent some vector like-- let me do a new color. You can't represent this vector right there with some combination of those two vectors. There's no way to kind of break out of this line. So there's no way that you can represent everything in R2. So the span is just that line there. Now, a related idea to this, and notice, you had two vectors, but it kind of reduced to one vector when you took its linear combinations. The related idea here is that we call this set-- we call it linearly dependent. 
Let me write that down: linearly dependent. This is a linearly dependent set. And linearly dependent just means that one of the vectors in the set can be represented by some combination of the other vectors in the set. A way to think about it is whichever vector you pick that can be represented by the others, it's not adding any new directionality or any new information, right? In this case, we already had a vector that went in this direction, and when you throw this 4, 6 on there, you're going in the same direction, just scaled up. So it's not giving us any new dimension, letting us break out of this line, right? And you can imagine in three space, if you have one vector that looks like this and another vector that looks like this, two vectors that aren't collinear, they're going to define a kind of two-dimensional space. They can define a two-dimensional space. Let's say that this is the plane defined by those two vectors. In order to define R3, a third vector in that set can't be coplanar with those two, right? If this third vector is coplanar with these, it's not adding any more directionality. So this set of three vectors will also be linearly dependent. And another way to think about it is that these two purple vectors span this plane, span the plane that they define, essentially, right? Anything in this plane going in any direction can be-- any vector in this plane, when we say span it, that means that any vector can be represented by a linear combination of this vector and this vector, which means that if this vector is on that plane, it can be represented as a linear combination of that vector and that vector. So this green vector I added isn't going to add anything to the span of our set of vectors and that's because this is a linearly dependent set. This one can be represented by a sum of that one and that one because this one and this one span this plane. 
In order for the span of these three vectors to kind of get more dimensionality or start representing R3, the third vector will have to break out of that plane. It would have to break out of that plane. And if a vector is breaking out of that plane, that means it's a vector that can't be represented anywhere on that plane, so it's outside of the span of those two vectors. Where it's outside, it can't be represented by a linear combination of this one and this one. So if you had a vector of this one, this one, and this one, and just those three, none of these other things that I drew, that would be linearly independent. Let me draw a couple more examples for you. That one might have been a little too abstract. So, for example, if I have the vectors 2, 3 and I have the vector 7, 2, and I have the vector 9, 5, and I were to ask you, are these linearly dependent or independent? So at first you say, well, you know, it's not trivial. Let's see, this isn't a scalar multiple of that. That doesn't look like a scalar multiple of either of the other two. Maybe they're linearly independent. But then, if you kind of inspect them, you kind of see that v, if we call this v1, vector 1, plus vector 2, if we call this vector 2, is equal to vector 3. So vector 3 is a linear combination of these other two vectors. So this is a linearly dependent set. And if we were to show it, draw it in kind of two space, and it's just a general idea that-- well, let me see. Let me draw it in R2. There's a general idea that if you have three two-dimensional vectors, one of them is going to be redundant. Well, one of them definitely will be redundant. For example, if we do 2, 3, if we do the vector 2, 3, that's the first one right there. I draw it in the standard position. And I draw the vector 7, 2 right there, I could show you that any point in R2 can be represented by some linear combination of these two vectors. We can even do a kind of a graphical representation. 
I've done that in the previous video, so I could write that the span of v1 and v2 is equal to R2. That means that every vector, every position here can be represented by some linear combination of these two guys. Now, the vector 9, 5, it is in R2. It is in R2, right? Clearly. I just graphed it on this plane. It's in our two-dimensional, real number space. Or I guess we could call it a space or in our set R2. It's there. It's right there. So we just said that anything in R2 can be represented by a linear combination of those two guys. So clearly, this is in R2, so it can be represented as a linear combination. So hopefully, you're starting to see the relationship between span and linear independence or linear dependence. Let me do another example. Let's say I have the vectors-- let me do a new color. Let's say I have the vector-- and this one will be a little bit obvious-- 7, 0, so that's my v1, and then I have my second vector, which is 0, minus 1. That's v2. Now, is this set linearly independent? Is it linearly independent? Well, can I represent either of these as a combination of the other? And really when I say as a combination, you'd have to scale up one to get the other, because there's only two vectors here. If I am trying to add up to this vector, the only thing I have to deal with is this one, so all I can do is scale it up. Well, there's nothing I can do. No matter what I multiply this vector by, you know, some constant and add it to itself or scale it up, this term right here is always going to be zero. It's always going to be zero. So nothing I can multiply this by is going to get me to this vector. Likewise, no matter what I multiply this vector by, the top term is always going to be zero. So there's no way I could get to this vector. So both of these vectors, there's no way that you can represent one as a combination of the other. So these two are linearly independent. And you can even see it if we graph it. One is 7, 0, which is like that. 
Let me do it in a non-yellow color. 7, 0. And one is 0, minus 1. And I think you can clearly see that if you take a linear combination of any of these two, you can represent anything in R2. So the span of these, just to kind of get used to our notion of span of v1 and v2, is equal to R2. Now, this is another interesting point to make. I said the span of v1 and v2 is R2. Now what is the span of v1, v2, and v3 in this example up here? I already told you. I already showed you that this third vector can be represented as a linear combination of these two. It's actually just these two summed up. I can even draw it right here. It's just those two vectors summed up. So it clearly can be represented as a linear combination of those two. So what's its span? Well, the fact that this is redundant means that it doesn't change its span. It doesn't change all of the possible linear combinations. So its span is also going to be R2. It's just that this was more vectors than you needed to span R2. R2 is a two-dimensional space, and you needed two vectors. So this was kind of a more efficient way of providing a basis, and I haven't defined basis formally, yet, but I just want to use it a little conversationally, and then it'll make sense to you when I define it formally. This provides a better basis, or this provides a basis, kind of a non-redundant set of vectors that can represent R2. While this one, right here, is redundant. So it's not a good basis for R2. Let me give you one more example in three dimensions. And then in the next video, I'm going to make a more formal definition of linear dependence or independence. So let's say that I had the vector 2, 0, 0. Let me make a similar argument that I made up there: the vector 2, 0, 0, the vector 0, 1, 0, and the vector 0, 0, 7. We are now in R3, right? Each of these are three-dimensional vectors. Now, are these linear dependent or linearly independent? Sorry, are they linearly dependent or independent? 
Well, there's no way with some combination of these two vectors that I can end up with a non-zero term right here to make this third vector, right? Because no matter what I multiply this one by and this one by, this last term is going to be zero. So this is kind of adding a new direction to our set of vectors. Likewise, there's nothing I can do-- there's no combination of this guy and this guy that I can get a non-zero term here. And finally, no combination of this guy and this guy that I can get a non-zero term here. So this set is linearly independent. And if you were to graph these in three dimensions, you would see that none of these-- these three do not lie on the same plane. Obviously, any two of them lie on the same plane, but if you were to actually graph it, you get 2, 0. Let me say that that's x-axis. That's 2, 0, 0. Then you have this, 0, 1, 0. Maybe that's the y-axis. And then you have 0, 0, 7. It would look something like this. So it almost looks like, your three-dimensional axes, it almost looks like the vectors i, j, k. They're just scaled up a little bit. But you can always correct that by just scaling them down, right? Because we care about any linear combination of these. So the span of these three vectors right here, because they're all adding new directionality, is R3. Anyway, I thought I would leave you there in this video. I realize I've been making longer and longer videos, and I want to get back in the habit of making shorter ones. In the next video, I'm going to make a more formal definition of linear dependence, and we'll do a bunch more examples.
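The three examples in the transcript can all be checked with a single rank computation: a set of column vectors is linearly independent exactly when the matrix they form has rank equal to the number of vectors. A minimal sketch using numpy (the library choice is mine, not Sal's):

```python
import numpy as np

# (2,3) and (4,6) are collinear: rank 1, so their span is just a line.
r1 = np.linalg.matrix_rank(np.column_stack([[2, 3], [4, 6]]))

# (2,3), (7,2), (9,5): v1 + v2 = v3, so the set is dependent and rank is 2.
r2 = np.linalg.matrix_rank(np.array([[2, 7, 9],
                                     [3, 2, 5]]))

# (2,0,0), (0,1,0), (0,0,7): scaled versions of i, j, k, so rank 3
# and the span is all of R^3.
r3 = np.linalg.matrix_rank(np.diag([2, 1, 7]))

print(r1, r2, r3)  # 1 2 3
```

In each case, rank less than the number of vectors means dependent; rank equal to it means independent.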