More on linear independence

More examples determining linear dependence or independence. Created by Sal Khan.

  • Sebastian
    There is one thing I am trying to wrap my head around. Sal states that to show the linear dependence of a set of vectors, you have to show that some weighted linear combination of the vectors in your set can equal the zero vector, where not all of your scalar weights are zero, or, stated another way, where at least one vector's weight has a non-zero value.

    So it got me thinking: in the case of a linearly dependent set of vectors where the weights used in the linear combination that forms the zero vector are all zero EXCEPT one of them, the vector whose weight is non-zero must be the zero vector as well. Is this right?
    (52 votes)
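    Sebastian's reasoning can be checked in one line, using the same notation as the video:

    c1 v1 + 0 v2 + ... + 0 vn = 0, with c1 non-zero
    => c1 v1 = 0
    => v1 = (1/c1) (c1 v1) = (1/c1) 0 = 0

    So yes: the one vector with a non-zero weight must itself be the zero vector.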
  • Chris Nguyen
    Why did he put "iff" instead of "if"?
    (2 votes)
    • Niki Mouzourakis
      "iff" is the shorthand for a biconditional, where a biconditonal is a sort of "2-way" if-then statement. i.e. Given that p and q are statements that can either be true or false, "p iff q" is the logical equivalent of saying "if p then q AND if q then p." The point of this is to have two theorems in one, so like paul said, instead of saying "p => q" AND "q => p," you can just say "p <=> q."
      (60 votes)
  • RayMan
    At the end, Sal explains that one cannot just pick out one "bad apple" from the group that makes a set linearly dependent. But what if I had this set and picked out the first vector? Wouldn't that make it linearly independent, while choosing the last one would still keep it dependent?
    v1 = [2, 4, 5]
    v2 = [4, 8, 10]
    v3 = [1, 1, 1]
    Keeping v1 and v2 would keep it linearly dependent, because v2 is a scaled version of v1 (or vice versa).
    Erasing v1 would allow for a plane to exist between v2 and v3.
    Erasing v2 would allow for a plane to exist between v1 and v3.
    So whether removing a vector breaks the dependence can depend on which vector you remove, right?
    (11 votes)
    • L M
      Yes.

      Here are some examples using 3 vectors with 2 components each.

      Oftentimes any 2 you pick are independent, but the 3 together are dependent (this is true in the example Sal gives). Here is an easy example:
      v1 = [1, 0]
      v2 = [0, 1]
      v3 = [1, 1]
      There is no way to get v1 just by multiplying v2 or just by multiplying v3.
      There is no way to get v2 just by multiplying v1 or just by multiplying v3.
      There is no way to get v3 just by multiplying v1 or just by multiplying v2.
      So any 2 you pick are independent.

      But you can get any third one from the other two. The 3 together are dependent:
      v1 = v3 - v2
      v2 = v3 - v1
      v3 = v1 + v2

      So you could call any 1 of the 3 vectors the bad apple.

      However (as you said), there are also situations where the first 2 are dependent (collinear), and then you add a 3rd, unique one. An easy example:
      v1 = [1, 0]
      v2 = [2, 0]
      v3 = [0, 1]

      Here, v1 and v2 are dependent (both lie on the "x axis")
      v1 = 0.5 * v2
      v2 = 2 * v1

      But there is no way you can get v3 out of (by multiplying and adding) v1 and v2.

      So you could say that v3 is certainly not a 'bad apple', but v1 and v2 can each equally be called a bad apple. So in this situation, although you can narrow it down to 2 vectors, you still cannot call any individual vector THE bad apple.

      Hope that was clear!
      (29 votes)
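      A quick numerical check of these cases (a small Python/numpy sketch, using RayMan's vectors from the question above):

      import numpy as np

      v1 = np.array([2, 4, 5])
      v2 = np.array([4, 8, 10])
      v3 = np.array([1, 1, 1])

      # A set of vectors is linearly independent exactly when the rank of
      # the matrix whose rows are those vectors equals the number of vectors.
      print(np.linalg.matrix_rank(np.vstack([v1, v2])))      # 1 -> dependent (v2 = 2*v1)
      print(np.linalg.matrix_rank(np.vstack([v2, v3])))      # 2 -> independent
      print(np.linalg.matrix_rank(np.vstack([v1, v3])))      # 2 -> independent
      print(np.linalg.matrix_rank(np.vstack([v1, v2, v3])))  # 2 < 3 -> the full set is dependent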
  • Clay Branch
    You solved the system of equations and found that the constants equalled zero. How did you know that that was the only way it could be solved? Couldn't there be another way to solve it that would lead to you discovering that it was linearly dependent?
    (11 votes)
    • mrpterpstra
      You find out whether a set is linearly independent by showing that the "c"s in the linear combination of the given set all have to be zero when making the zero vector.
      He didn't find it to equal the zero vector; he just said let's see what happens IF it equals the zero vector. If you find all the "c"s must be zero when making the zero vector, it's independent; if not, it's dependent.
      (11 votes)
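      One way to see why c1 = c2 = 0 is the ONLY solution: the coefficient matrix of the system is invertible, so the homogeneous system has a unique solution. A minimal numpy sketch of that check (not from the video):

      import numpy as np

      # Columns are the vectors from the video's first example: (2,1) and (3,2).
      A = np.array([[2.0, 3.0],
                    [1.0, 2.0]])

      # A c = 0 has only the trivial solution c = 0 exactly when det(A) != 0.
      print(np.linalg.det(A))                 # 1.0 -> invertible
      print(np.linalg.solve(A, np.zeros(2)))  # [0. 0.] -> the unique solution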
  • d3pohar
    In the last example, why is it valid to choose an arbitrary weight for c3 (or, more generally, ci) in order to solve the two-equation system? I understand that we can't solve the system until we have only two unknowns, because there are only two equations, but I don't understand why we can choose any number for c3 in order to "constrain" the system and make it solvable.
    (15 votes)
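    The system here is homogeneous with three unknowns but only two equations, so its solutions form (at least) a line through the origin; fixing c3 just picks one point on that line, and any non-zero choice yields a valid dependence. A small numpy sketch illustrating this with the video's vectors (the choices of c3 other than -1 are illustrative, not from the video):

    import numpy as np

    # Coefficients of c1 and c2, from the vectors (2,1) and (3,2).
    A = np.array([[2.0, 3.0],
                  [1.0, 2.0]])

    for c3 in (-1.0, 2.0, 5.0):
        # Move the c3 terms (from the third vector (1,2)) to the right-hand
        # side and solve the remaining 2x2 system for c1 and c2.
        c1, c2 = np.linalg.solve(A, -c3 * np.array([1.0, 2.0]))
        print(c1, c2, c3)  # each triple gives c1*(2,1) + c2*(3,2) + c3*(1,2) = (0,0)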
  • Dariusz
    I was wondering if anyone has a good way of remembering the difference between dependence and independence (whether something is dependent or independent). I understand the difference between them, but I always seem to get the two names confused (just the names of what they represent) unless I refer back to my notes.
    (6 votes)
    • Jeremy
      Dariusz,
      You could try remembering it this way: if they are linearly DEPENDENT, then one of the vectors "depends" on the others (in the sense that it can be written as a linear combination of the others).

      For example, in R^2, if you have vectors a, b, and c, we know that c can be written as some combination of a and b (assuming a and b are linearly independent), and likewise a as a combination of b and c, and b as a combination of a and c. So c depends on a and b in some sense. It's not the most formal definition, but it may make the idea clear.

      Let's contrast that with a linearly independent set. Say vectors a and b are in R^2, and we know they are independent, like [1,0] and [0,1]. a cannot be written as some combination of b, nor vice versa. They are totally separate and INDEPENDENT.

      Does that help?
      (16 votes)
  • Shiriru
    Is a set containing the 0 vector always linearly dependent?

    For example:
    v1 = (0,0,0)
    v2 = (2,1,0)
    v3 = (1,4,3)
    I can solve c1 v1 + c2 v2 + c3 v3 = 0
    with c1 = 4, c2 = 0, c3 = 0
    At least one weight is non-zero => linearly dependent set?
    (11 votes)
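    Shiriru's conclusion is correct: any set containing the zero vector is linearly dependent, because the weight on the zero vector can be anything:

    c1 * (0,0,0) + 0 * v2 + 0 * v3 = (0,0,0)   for ANY c1 (c1 = 4 works, and so does c1 = 1)

    so a combination with at least one non-zero weight always exists.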
  • acha Zeng
    Sal said that if a set of vectors is linearly dependent, AT LEAST ONE of the coefficients is non-zero. Think about the situation where exactly 1 coefficient is non-zero: the only solution of c1 v1 = 0 with c1 non-zero is that v1 is the zero vector. Does it mean the zero vector is linearly dependent?
    (6 votes)
    • Moon Bears
      Remember that linear dependence and independence is a property of sets of vectors, not of vectors themselves! If v is a non-zero vector, then the set {v} must be linearly independent. For a set of two vectors to be linearly dependent, they must be collinear: let x, y be our vectors and a, b our scalars; then ax + by = 0 iff ax = -by. If a is non-zero then x = (-b/a)y; likewise, if b is non-zero you can solve for y. Therefore a set of two vectors is linearly dependent iff they are "collinear", i.e. one is a scalar multiple of the other.
      (8 votes)
  • marvango
    At the end of the video, he said that any of the vectors could be "the bad apple" (the one that is redundant for them to span R2). But that's not necessarily true. If you have an î-vector, a ĵ-vector, and a 2ĵ-vector, then one of the ĵ-vectors is the redundant one.

    If you remove the î-vector, you can no longer represent all of R2. Right?
    (5 votes)
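    marvango's point can be checked with ranks (note Sal's "any of them could be the bad apple" remark was about his specific example, where every pair is independent). A minimal numpy sketch of this example:

    import numpy as np

    i_hat   = np.array([1, 0])
    j_hat   = np.array([0, 1])
    j_hat_2 = np.array([0, 2])

    # Rank 2 means the remaining pair still spans R^2; rank 1 means it does not.
    print(np.linalg.matrix_rank(np.vstack([j_hat, j_hat_2])))  # 1 -> removing i-hat loses R^2
    print(np.linalg.matrix_rank(np.vstack([i_hat, j_hat_2])))  # 2 -> removing j-hat is fine
    print(np.linalg.matrix_rank(np.vstack([i_hat, j_hat])))    # 2 -> removing 2*j-hat is fine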
  • MidnightStar312
    My memory of math is a bit fuzzy, so this might be a stupid question. Why did Sal randomly divide the equation 2C1 + 3C2 = 0 by one half?
    (4 votes)
    • Sid
      He didn't divide by 1/2, he multiplied by 1/2.

      He did that to get the same coefficient on the C1 terms in both equations. After that, he can subtract one equation from the other to get rid of C1 altogether.

      Another possibility would have been to multiply the second equation by 2.
      (6 votes)
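      Written out, the elimination step Sid describes (using the video's equations) is:

      2c1 + 3c2 = 0          (first equation)
      c1 + 2c2 = 0           (second equation)
      multiply the first by 1/2:   c1 + (3/2)c2 = 0
      subtract it from the second: (2 - 3/2)c2 = (1/2)c2 = 0

      so c2 = 0, and substituting back gives c1 = 0.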

Video transcript

I think by now we have a reasonable sense of what linear dependence means. So let's do a slightly more formal definition of linear dependence. Let me just define my set of vectors: I'll call my set S of vectors v1, v2, all the way to vn. I'm going to say that they are linearly dependent if and only if -- sometimes it's written "iff," if with an extra f in there, and sometimes it's shown as an arrow pointing in two directions -- I can satisfy this equation: I can find a set of constants, c1 times v1 plus, taking a linear combination of my vectors all the way to cn times vn, such that this equals the 0 vector. Sometimes it's just written as a bold 0, and sometimes you could just write it out -- I mean, we don't know the dimensionality of this vector; it would be a bunch of 0's. We don't know how many actual elements are in each of these vectors, but you get the idea. My set of vectors is linearly dependent -- remember, I'm saying dependent, not independent -- if and only if I can satisfy this equation for some ci's where not all of them are equal to 0. This is key: not all are 0. Or, to say it the other way, at least one is non-zero.

So how does this gel with what we were talking about in the previous video, where I said, look, a set is linearly dependent if one of the vectors can be represented by a combination of the other vectors? Let me write that down, a little bit more math-y. In the last video, I said that linear dependence means that -- let me just pick an arbitrary vector, v1; the choice is arbitrary -- v1 could be represented by some combination of the other vectors: a2 times v2 plus a3 times v3, plus all the way up to an times vn. This is what we said in the previous video: if this is linear dependence, any one of these guys can be represented as some combination of the other ones.

So how does this imply that? To show this if and only if, I have to show that this implies that, and that that implies this. This is almost a trivially easy proof. If I subtract v1 from both sides of this equation, I get 0 is equal to minus 1 times v1, plus a2 v2, plus a3 v3, all the way to an vn. So if one of the vectors can be represented as a sum of the others, then minus 1 times v1 plus that combination of the other vectors is equal to 0, which means I've been able to satisfy this equation, and at least one of my constants (the minus 1) is non-zero. So I've shown you that if I can represent one of the vectors as a combination of the other ones, then this condition is definitely going to be true.

Now let me go the other way: let me show you that if this equation holds, I can definitely represent one of the vectors as a combination of the others. So let's say that this is true, and at least one of these constants -- remember, it's at least one -- is non-zero. Let me just assume, for the sake of simplicity (these are all arbitrary; I'll do it in a new color, magenta), that c1 is not equal to 0.
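Written out compactly, the last argument above is: starting from

    v1 = a2 v2 + a3 v3 + ... + an vn

subtracting v1 from both sides gives

    0 = (-1) v1 + a2 v2 + a3 v3 + ... + an vn

which satisfies the dependence equation c1 v1 + c2 v2 + ... + cn vn = 0 with c1 = -1, a non-zero constant.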
If c1 is not equal to 0, then I can divide both sides of this equation by c1. And what do I get? I get v1, plus c2 over c1 times v2, plus all the way up to cn over c1 times vn, is equal to 0. Then I can subtract v1 from both sides, and I get c2 over c1 v2, plus all the way up to cn over c1 vn, is equal to minus v1. Now if I just multiply both sides of this by negative 1, all of these terms become minuses and this becomes a plus v1. So I've just shown you that if at least one of these constants is non-zero, I can represent my vector v1 as some combination of the other vectors. So we're able to go this way too: if this condition is true, then I can represent one of the vectors as a combination of the others; and if I can represent one of the vectors as a combination of the others, then this condition is true. Hopefully that proves that these two definitions are equivalent. Maybe it's a little bit of overkill.

Let's apply that definition now, to actually test. You might say, hey Sal, why'd you go through all of this effort? Because this is actually a really useful way to test whether things are linearly independent or dependent. Let's try out our newly found tool. Let's say I have the set of vectors -- let me do it up here; I want to be efficient with my space usage -- (2, 1) and (3, 2). And my question to you is: are these linearly independent or linearly dependent? For them to be linearly dependent, some constant times (2, 1) plus some other constant times this second vector, (3, 2), must equal the 0 vector, where these constants aren't both 0. Before I work through this problem, let's remember what we're going to find out: if either of these, c1 or c2, can be non-zero, then this implies that we are dealing with a linearly dependent set. If the only way to satisfy this equation is by making both of these guys 0 -- you can always satisfy it by setting everything equal to 0 -- then we're dealing with a linearly independent set.

Let's do some math, and this'll just take us back to our Algebra 1 days. In order for this to be true, 2 times c1 plus 3 times c2 must be equal to -- when I say this is equal to 0, it's really the 0 vector, which I can rewrite as (0, 0) -- so 2 times c1 plus 3 times c2 would be equal to that 0 there, and 1 times c1 plus 2 times c2 would be equal to that 0. And now this is just a system: two equations, two unknowns. A couple of things we could do. Let's just multiply the top equation by 1/2. If you multiply it by 1/2, you get c1 plus 3/2 c2 is equal to 0. And then if we subtract the green equation from the red equation, this becomes 0, and 2 minus 1 and 1/2 -- 3/2 is 1 and 1/2 -- just leaves 1/2 c2 equal to 0. And this is easy to solve: c2 is equal to 0. So what's c1? Well, just substitute back in: c2 is equal to 0, so this term is 0, so c1 plus 0 is equal to 0, and c1 is also equal to 0. We could have substituted into the top equation as well. So the only solution to this equation involves both c1 and c2 being equal to 0. So this is a linearly independent set of vectors, which means that neither of them is redundant: you can't represent one as a combination of the other.
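Compactly, the divide-by-c1 direction above is:

    c1 v1 + c2 v2 + ... + cn vn = 0, with c1 non-zero
    => v1 + (c2/c1) v2 + ... + (cn/c1) vn = 0
    => v1 = -(c2/c1) v2 - (c3/c1) v3 - ... - (cn/c1) vn

so v1 is a combination of the other vectors.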
And since we have two vectors here and they're linearly independent, we actually know that they span R2: the span of my vectors is equal to R2. If one of these vectors had just been some multiple of the other, then the span would have been some line within R2, not all of it. But now I can represent any vector in R2 as some combination of these.

Let's do another example. Let me scroll to the right, because sometimes, when I go too far down -- I haven't figured out why -- this thing starts messing up. So my next example is this set of vectors: the vector (2, 1), the vector (3, 2), and the vector (1, 2). And I want to know: are these linearly dependent or linearly independent? I go through the same drill, using that little theorem that I proved at the beginning of this video. For them to be linearly dependent, there must be some set of weights -- c1 times this vector plus c2 times this vector plus c3 times that vector -- that equals the 0 vector. If one of these can be non-zero, then we're dealing with a linearly dependent set of vectors; if all of them have to be 0, then it's independent.

Let's just do our linear algebra. This means that 2 times c1, plus 3 times c2, plus c3 is equal to that 0 up there. And then for the bottom row -- remember, when you multiply a scalar times a vector, you multiply it by each of the vector's terms -- 1c1 plus 2c2 plus 2c3 is equal to 0.

There are a couple of giveaways in this problem. If you have three two-dimensional vectors, one of them is going to be redundant. Because, in the very best case, even if you assume that this vector and that vector are linearly independent, those two span R2, which means that any vector in your two-dimensional space can be represented by some combination of them. In that case, this third one is one of them, because it's just a vector in two-dimensional space, so the set would be linearly dependent. And if instead those two aren't linearly independent -- if they're just multiples of each other -- then this would definitely be a linearly dependent set. When you see three vectors that are each two-dimensional vectors in R2, it's a complete giveaway that the set is linearly dependent.

But I'm going to show it to you using our little theorem here. I'm going to show you that I can get non-zero c3's, c2's, and c1's such that I get a 0 here. (If all of these had to be 0 -- you can always set them equal to 0 -- then it would be linearly independent.) Let me just show you: I can pick some random c3. Let me pick c3 to be equal to negative 1. So what do these two equations reduce to? You have three unknowns and only two equations, which means you don't have enough constraints on your system. So if I just set c3 -- I picked that out of a hat; I could have picked c3 to be anything -- if I set c3 equal to negative 1, what do these equations become? You get 2c1 plus 3c2 minus 1 is equal to 0, and you get c1 plus 2c2 minus 2 is equal to 0 -- right? 2 times minus 1 is minus 2. What can I do here? If I multiply the second equation by 2, I get 2c1 plus 4c2 minus 4 is equal to 0. And now let's subtract this equation from that equation. The c1's cancel out. 3c2 minus 4c2 is minus c2. And then minus 1 minus minus 4 -- that's minus 1 plus 4, which is plus 3 -- is equal to 0.
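The dimension-count giveaway Sal mentions (three vectors in R2 can never be independent) can be confirmed with a rank check; a minimal numpy sketch, not part of the video:

    import numpy as np

    # Rows are the three vectors from this example.
    V = np.array([[2, 1],
                  [3, 2],
                  [1, 2]])

    # The rank is at most 2 (the vectors live in R^2), but there are 3 vectors,
    # so the set cannot be linearly independent.
    print(np.linalg.matrix_rank(V))  # 2 < 3 -> linearly dependent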
Let me make sure I got that right: minus 1 minus a minus 4, so plus 4, giving a plus 3. So minus c2 plus 3 is equal to 0, minus c2 is equal to minus 3, or c2 is equal to 3. And if c2 is equal to 3 and c3 is equal to minus 1, let's just substitute here: we get c1, plus 2 times c2 -- so plus 6 -- plus 2 times c3 -- so minus 2 -- is equal to 0. c1 plus 4 is equal to 0, so c1 is equal to minus 4.

So I've given you a combination of c's that gives us the 0 vector. If I multiply minus 4 times our first vector, (2, 1) -- that's c1 -- plus 3 times our second vector, (3, 2), minus 1 times our third vector, (1, 2), this should be equal to 0. Let's verify it, just for fun. First components: minus 4 times 2 is minus 8, plus 9, minus 1 -- that's minus 9 plus 9, which is 0. Second components: minus 4, plus 6, minus 2 -- that's also 0. So we've just shown a linear combination of these vectors where actually none of the constants are 0. All we had to show was that at least one of the constants was non-zero, and we actually showed all three of them were. I was able to satisfy this equation -- to make them into the zero vector -- so this proves that this is a linearly dependent set of vectors, which means one of the vectors is redundant.

And you can never just say, oh, THIS one is the redundant vector, because I can represent it as a combination of the other two: you could just as easily pick this guy as the redundant vector and say, hey, I can represent this guy as a combination of those two. There's not one bad apple in the bunch; any of them can be represented as a combination of all of the rest. So hopefully you now have a better intuition of linear dependence and independence. I'll do a few more examples in the next video.
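The final verification, as a short numpy sketch:

    import numpy as np

    v1, v2, v3 = np.array([2, 1]), np.array([3, 2]), np.array([1, 2])

    # The weights found in the video: c1 = -4, c2 = 3, c3 = -1.
    print(-4 * v1 + 3 * v2 - 1 * v3)  # [0 0] -> a non-trivial combination gives the zero vector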