# More on linear independence

## Video transcript

I think by now we have a reasonable sense of what linear dependence means, so let's do a slightly more formal definition of linear dependence. Let me define my set S of vectors: v1, v2, all the way to vn. I'm going to say that they are linearly dependent if and only if -- sometimes that's written "iff", with a lot of f's in there, and sometimes it's shown as an arrow pointing in both directions -- I can satisfy this equation: I can find constants c1 through cn so that the linear combination c1 times v1 plus all the way up to cn times vn equals the 0 vector. Sometimes that's written as a bold 0, and sometimes you could write it out as a bunch of 0's -- we don't know the dimensionality of this vector, or how many elements are in each of these vectors, but you get the idea. My set of vectors is linearly dependent -- remember, I'm saying dependent, not independent -- if and only if I can satisfy this equation for some ci's where not all of them are equal to 0. This is key: not all are 0. Or, said the other way, at least one is non-zero. So how does this gel with what we were talking about in the previous video, where I said a set is linearly dependent if one of the vectors can be represented by a combination of the other vectors? Let me write that down a little more math-y. In the last video I said that linear dependence means one vector can be represented as a sum of the others. Let me just pick an arbitrary vector, v1 -- the choice is arbitrary -- and say that v1 can be represented by some combination of the other vectors.
Let me call them a2 times v2 plus a3 times v3, all the way up to an times vn. This is what we said in the previous video: if this is linear dependence, any one of these guys can be represented as some combination of the other ones. So how does this imply that? To show this if and only if, I have to show that this implies that, and that that implies this. The first direction is almost trivially easy: if I subtract v1 from both sides of this equation, I get 0 equals minus 1 times v1 plus a2 v2 plus a3 v3, all the way up to an vn. So if I can represent this vector as a sum of the other vectors, then minus 1 times v1 plus some combination of the other vectors equals 0, which means I've satisfied this equation with at least one of my constants non-zero -- the minus 1 sitting in front of v1. So I've shown you that if I can represent one of the vectors as a sum of the others, then this condition is definitely going to be true. Now let me go the other way and show that if I have this situation, I can definitely represent one of the vectors as a sum of the others. So let's say this is true, and at least one of these constants -- remember, it's at least one, not just this one -- is non-zero. Let me assume, just for the sake of simplicity (these are all arbitrary), that c1 is not equal to 0. If c1 is not equal to 0, then I can divide both sides of this equation by c1. And what do I get? I get v1 plus c2 over c1 times v2, plus all the way up to cn over c1 times vn, equals 0. Then I can subtract v1 from both sides, and I get c2 over c1 v2 plus all the way up to cn over c1 vn -- there's a vn here -- is equal to minus v1.
Now if I just multiply both sides of this by negative 1, all of these terms become minuses and the right-hand side becomes plus v1. And I've just shown you that if at least one of these constants is non-zero, I can represent my vector v1 as some combination of the other vectors. So we're able to go this way too: if this condition is true, then I can represent one of the vectors as a combination of the others, and if I can represent one of the vectors as a combination of the others, then this condition is true. Hopefully that proves that these two definitions are equivalent. Maybe it's a little bit of overkill. You might say, hey Sal, why'd you go through all of this effort? I did it because this is actually a really useful way to test whether things are linearly independent or dependent. So let's try out our newly found tool. Let's say I have the set of vectors (2, 1) and (3, 2), and my question to you is: are these linearly independent or linearly dependent? For them to be linearly dependent, some constant c1 times (2, 1) plus some other constant c2 times the second vector, (3, 2), would have to equal 0, where c1 and c2 aren't both 0. Before I work through this problem, let's remember what we're going to find out. If either c1 or c2 can be non-zero, that implies we are dealing with a linearly dependent set. You can always satisfy the equation by setting everything equal to 0; but if that's the only way to satisfy it -- if c1 and c2 both have to be 0 -- then we're dealing with a linearly independent set. Let's do some math. This will just take us back to our Algebra 1 days.
In order for this to be true, 2 times c1 plus 3 times c2 has to equal 0 -- and when I say this equals 0, it's really the 0 vector, which I can rewrite as (0, 0). So 2c1 plus 3c2 equals that first 0, and 1c1 plus 2c2 equals that second 0. Now this is just a system: two equations, two unknowns. There are a couple of things we could do; let's multiply the top equation by 1/2. That gives c1 plus 3/2 c2 equals 0. Then if we subtract this new equation from the second one, the c1's cancel, and 2 minus 3/2 -- 3/2 is 1 and 1/2, so that's 1/2 -- leaves 1/2 c2 equals 0. That's easy to solve: c2 equals 0. So what's c1? Just substitute c2 equals 0 back in: c1 plus 0 equals 0, so c1 is also equal to 0. (We could have substituted into the top equation just as well.) So the only solution to this equation has both c1 and c2 equal to 0, which means this is a linearly independent set of vectors. Neither of them is redundant; you can't represent one as a combination of the other. And since we have two vectors here and they're linearly independent, we actually know they span R2: the span of my vectors is all of R2. If one of these vectors were just some multiple of the other, then the span would have been some line within R2, not all of it. But as it stands, I can represent any vector in R2 as some combination of these two. Let's do another example. This time I have three vectors: (2, 1), (3, 2), and (1, 2). And I want to know: are these linearly dependent or linearly independent? So I go through the same drill.
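Before working the three-vector example, here is a quick numerical check of the first one. This is a minimal sketch using NumPy (the library and this check are my addition, not part of the video): stack the two vectors as the columns of a matrix; a nonzero determinant means the only solution of A times c equals 0 is c equals 0.

```python
# Sanity check of the first example: are (2, 1) and (3, 2) linearly
# independent? For a square matrix whose columns are the vectors, a
# nonzero determinant means c = 0 is the only solution of A @ c = 0.
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 2.0]])  # columns are the vectors (2, 1) and (3, 2)

det = np.linalg.det(A)               # 2*2 - 3*1 = 1, nonzero
c = np.linalg.solve(A, np.zeros(2))  # the unique solution of A @ c = 0

print(det)  # ~1.0 -> nonzero, so the set is independent
print(c)    # [0. 0.] -> only the trivial combination gives the zero vector
```

This is the same conclusion the hand computation reaches: c1 = c2 = 0 is forced.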
I use that little theorem that I proved at the beginning of this video. For these to be linearly dependent, there must be some set of weights -- c1 times this vector plus c2 times this vector plus c3 times that vector -- that gives the 0 vector, with at least one of the weights non-zero. If all of them have to be 0, then the set is independent. So let's just do our linear algebra. This means that 2c1 plus 3c2 plus 1c3 equals that first 0 up there. And then the bottom row -- remember, when you multiply a scalar times a vector you multiply it by each of the terms -- gives 1c1 plus 2c2 plus 2c3 equals the second 0. There's a giveaway on this problem. If you have three two-dimensional vectors, one of them is going to be redundant. In the very best case, even if you assume that that vector and that vector are linearly independent, those two would span R2, which means any vector in your two-dimensional space can be represented by some combination of them. In that case the third one is covered, because it's just a vector in two-dimensional space, so the set would be linearly dependent. And if instead those two aren't linearly independent -- they're just multiples of each other -- then this is definitely a linearly dependent set as well. So when you see three vectors that each live in R2, it's a complete giveaway that the set is linearly dependent. But I'm going to show it to you using our little theorem here: I'm going to find c1, c2, and c3, not all zero, such that the combination gives a 0 here. (If they all had to be 0 -- you can always set them equal to 0, but if they had to be -- then it would be linearly independent.) Let me just show you: I can pick some random c3.
Let me pick c3 equal to negative 1. Since we have three unknowns and only two equations, we don't have enough constraints on the system -- I just picked that value out of a hat; I could have picked c3 to be anything. So with c3 equal to negative 1, what do these equations become? The first becomes 2c1 plus 3c2 minus 1 equals 0, and the second becomes c1 plus 2c2 minus 2 equals 0. Right? That last term is 2 times minus 1. What can I do here? If I multiply the second equation by 2, I get 2c1 plus 4c2 minus 4 equals 0. Now let's subtract this equation from the first one. The c1's cancel out; 3c2 minus 4c2 is minus c2; and minus 1 minus minus 4 is minus 1 plus 4, which is plus 3. So minus c2 plus 3 equals 0, which means minus c2 equals minus 3, or c2 equals 3. And if c2 equals 3 and c3 equals negative 1, let's substitute into the second equation: c1 plus 2 times c2 -- that's plus 6 -- plus 2 times c3 -- that's minus 2 -- equals 0. So c1 plus 4 equals 0, and c1 equals negative 4. So I'm giving you a combination of c's that produces the 0 vector: minus 4 times our first vector, (2, 1), plus 3 times our second vector, (3, 2), minus 1 times our third vector, (1, 2), should equal 0. Let's verify it just for fun. In the first coordinate: minus 4 times 2 is minus 8, plus 9, minus 1 -- that's minus 9 plus 9, which is 0. In the second coordinate: minus 4, plus 6, minus 2 -- that's also 0. So we've just shown a linear combination of these vectors that equals the zero vector, where actually none of the constants is 0. All we had to show was that at least one of the constants is non-zero, and we actually showed all three of them are. I was able to satisfy this equation; I was able to make these vectors into the zero vector.
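The weights we just found by hand can also be checked in code, and a null-space computation recovers such weights automatically. Here's a minimal NumPy sketch (my addition, not part of the video); the SVD route is just one convenient way to get a null-space vector, not the method the video uses.

```python
# Verify the dependent example: -4*(2,1) + 3*(3,2) - 1*(1,2) should be
# the zero vector. Then recover weights automatically from the null
# space of the matrix whose columns are the three vectors.
import numpy as np

v1, v2, v3 = np.array([2.0, 1.0]), np.array([3.0, 2.0]), np.array([1.0, 2.0])

combo = -4 * v1 + 3 * v2 - 1 * v3
print(combo)  # [0. 0.]

# Null space via SVD: for a 2x3 matrix of rank 2, the last right-singular
# vector gives weights c with A @ c = 0 (unique up to scaling).
A = np.column_stack([v1, v2, v3])
_, s, vt = np.linalg.svd(A)
c = vt[-1]      # spans the null space; proportional to (-4, 3, -1)
print(A @ c)    # ~[0. 0.] -> a nontrivial dependence, as the video found
```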
So this shows, this proves, that this is a linearly dependent set of vectors, which means one of the vectors is redundant. But you can never single one out and say, oh, this is the redundant vector, because I can represent it as a combination of the other two -- you could just as easily pick another one and say, hey, I can represent this guy as a combination of the rest. There's not one bad apple in the bunch; any of them can be represented as a combination of all the rest. So hopefully you have a better intuition of linear dependence and independence. I'll do a few more examples in the next video.
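As a closing sketch (again assuming NumPy; this helper is my addition, not from the video), the whole test can be phrased as a rank check: a set of vectors is linearly independent exactly when the rank of the matrix built from those vectors as columns equals the number of vectors.

```python
# Rank-based independence test: n vectors are independent iff the matrix
# with those vectors as columns has rank n.
import numpy as np

def is_independent(vectors):
    A = np.column_stack(vectors)
    return int(np.linalg.matrix_rank(A)) == len(vectors)

print(is_independent([[2, 1], [3, 2]]))          # True: the first example
print(is_independent([[2, 1], [3, 2], [1, 2]]))  # False: 3 vectors in R^2
```

The second call fails for exactly the "giveaway" reason from the video: three vectors in R2 can have rank at most 2.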