
## Linear algebra


Lesson 3: Linear dependence and independence

# More on linear independence

More examples determining linear dependence or independence. Created by Sal Khan.

## Want to join the conversation?

• There is one thing I am trying to wrap my head around. Sal states that to show the linear dependence of a set of vectors, you have to show that some weighted linear combination of the vectors in your set can equal the zero vector, where not all your scalar weights are zero, or, otherwise stated, where at least one vector's weight has a non-zero value.

So it got me thinking: in the case of a linearly dependent set of vectors where the weights used in the linear combination forming the zero vector are all zero EXCEPT one of them, that vector whose weight is non-zero must be the zero vector as well. Is this right? • I think your analysis is spot on. For a linearly dependent set of vectors that doesn't include the zero vector, there would need to be at least two non-zero terms in the combination to get the zero vector.
• Why did he put "iff" instead of "if"?
•   "iff" is shorthand for a biconditional, where a biconditional is a sort of "2-way" if-then statement. I.e., given that p and q are statements that can either be true or false, "p iff q" is the logical equivalent of saying "if p then q AND if q then p." The point of this is to have two theorems in one, so like Paul said, instead of saying "p => q" AND "q => p," you can just say "p <=> q."
• In the end Sal explains that one cannot just pick out one "bad apple" from the group that makes a set linearly dependent. But what if I had this set and picked out the first one, wouldn't that make it linearly independent while choosing the last one would still keep it dependent?
v1 = [2, 4, 5]
v2 = [4, 8, 10]
v3 = [1, 1, 1]
Keeping v1 and v2 would keep the set linearly dependent, because v2 is a scaled version of v1 (and vice versa).
Erasing v1 would allow a plane to exist between v2 and v3.
Erasing v2 would allow a plane to exist between v1 and v3.
So sometimes specific changes to a list of vectors do or do not affect dependence, right? •  Yes.

Here are some examples using 3 vectors with 2 components each.

Often, any 2 you pick are independent, but the 3 together are dependent (this is true in the example Sal gives). Here is an easy example:
v1 = [1, 0]
v2 = [0, 1]
v3 = [1, 1]
There is no way to get v1 just by multiplying v2 or just by multiplying v3.
There is no way to get v2 just by multiplying v1 or just by multiplying v3.
There is no way to get v3 just by multiplying v1 or just by multiplying v2.
Any 2 you pick are independent.

But you can get any third one from the other two. The 3 together are dependent:
v1 = v3 - v2
v2 = v3 - v1
v3 = v1 + v2

So you could call any 1 of the 3 vectors the bad apple.

However (as you said), there are also situations where the first 2 are dependent (collinear), and then you add a 3rd, unique one. An easy example:
v1 = [1, 0]
v2 = [2, 0]
v3 = [0, 1]

Here, v1 and v2 are dependent (both lie on the "x axis")
v1 = 0.5 * v2
v2 = 2 * v1

But there is no way you can get v3 out of (by multiplying and adding) v1 and v2.

So you could say that v3 is certainly not a 'bad apple', but v1 and v2 can each equally be called one. So in this situation, although you can narrow it down to 2 vectors, you still cannot call any individual vector the bad apple.

Hope that was clear!
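The relations in the answer above can be checked with a quick sketch in plain Python (the helper names `add` and `sub` are just for this illustration, not anything from the lesson):

```python
def sub(u, v):
    """Component-wise difference u - v of two vectors given as lists."""
    return [a - b for a, b in zip(u, v)]

def add(u, v):
    """Component-wise sum u + v of two vectors given as lists."""
    return [a + b for a, b in zip(u, v)]

v1, v2, v3 = [1, 0], [0, 1], [1, 1]

# Each vector is a combination of the other two, so the set is dependent
# even though every pair on its own is independent.
print(sub(v3, v2) == v1)  # v1 = v3 - v2
print(sub(v3, v1) == v2)  # v2 = v3 - v1
print(add(v1, v2) == v3)  # v3 = v1 + v2
```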
• You solved the system of equations and found that it equaled zero. How did you know that was the only way it could be solved? Couldn't there be another way to solve it that would lead you to discover it was linearly dependent? • You find out whether a set is linearly independent by showing that the c's in the linear combination of the given set all have to be zero when making the zero vector.
He didn't find it to equal the zero vector; he just said let's see what happens IF it equals the zero vector. If you find that all the c's must be zero when making the zero vector, it's independent; if not, it's dependent.
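One standard way to mechanize this check (not Sal's method in the video, just a common approach) is row reduction: a set of vectors is independent exactly when the matrix whose rows are those vectors has rank equal to the number of vectors. A rough sketch using exact `Fraction` arithmetic to avoid rounding issues:

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a matrix (list of row vectors) and count pivot rows."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # next pivot row
    cols = len(m[0]) if m else 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # eliminate column c from every other row
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent(vectors):
    """True iff the only weights giving the zero vector are all zero."""
    return rank(vectors) == len(vectors)

print(independent([[1, 0], [0, 1]]))          # True
print(independent([[2, 4, 5], [4, 8, 10]]))   # False: v2 = 2 * v1
print(independent([[1, 0], [0, 1], [1, 1]]))  # False: 3 vectors in R^2
```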
• In the last example, why is it valid to choose an arbitrary weight for c3 (or more generally, ci) in order to solve the two-equation system? I understand that we can't solve the system until we have only two unknowns, because there are only two equations, but I don't understand why we can choose any number for c3 in order to "constrain" the system and make it solvable. • Because if we find at least one solution with one of the ci's not equal to zero, we have sufficiently proved linear dependence of the set. So if we set c3 in this example to some non-zero number and solve the equations for c1 and c2, we give sufficient proof of linear dependence.
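As a concrete illustration of the free-parameter idea (using the small set v1 = [1, 0], v2 = [0, 1], v3 = [1, 1] from an earlier answer, not Sal's actual example): with two equations and three unknowns, c3 can be chosen freely, and any non-zero choice yields a non-trivial combination equal to the zero vector.

```python
# c1*1 + c2*0 + c3*1 = 0  ->  c1 = -c3
# c1*0 + c2*1 + c3*1 = 0  ->  c2 = -c3
c3 = 5      # any non-zero choice works
c1 = -c3
c2 = -c3

v1, v2, v3 = [1, 0], [0, 1], [1, 1]
combo = [c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3)]
print(combo)  # the zero vector, from weights that are not all zero
```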
• I was wondering if anyone has a good way of remembering the difference between dependence and independence (i.e., whether something is dependent or independent). I understand the difference between them, but always seem to get the two confused (just the names of what they represent) unless I refer back to my notes. • Dariusz,
You could try remembering it this way: if the vectors are linearly DEPENDENT, then one of them "depends" on the others, in the sense that it can be written as a linear combination of the others.

For example, in R^2, if you have vectors a, b, and c (with a and b independent), we know that c can be written as some combination of a and b... (and likewise a as a combination of b and c, and b as a combination of a and c, provided each pair is independent). So c depends on a and b in some sense. It's not the most formal definition, but it may make things clear.

Let's contrast that with a linearly independent set. Say vectors a and b are in R^2 and we know they are independent, like [1,0] and [0,1]. a cannot be written as some combination of b, nor vice versa. They are totally separate and INDEPENDENT.

Does that help?
• Is a set containing the 0 vector always linearly dependent?

For example:
v1 = (0,0,0)
v2 = (2,1,0)
v3 = (1,4,3)
I can solve c1 v1 + c2 v2 + c3 v3 = 0
with c1 = 4, c2 = 0, c3 = 0
At least one weight is non-zero => linearly dependent set? • Sal said that if the set of vectors is linearly dependent, AT LEAST ONE of the coefficients is non-zero. Think about the situation where exactly one coefficient is non-zero: the only solution of c1 v1 = 0 is that v1 is the zero vector. Does this mean the zero vector is linearly dependent? • Remember that linear dependence and independence is a property of sets of vectors, not of vectors themselves! If v is a non-zero vector then the set {v} must be linearly independent. For a set of two vectors to be linearly dependent they must be collinear: let x, y be our vectors and a, b our scalars; then ax + by = 0 iff ax = -by. If a is non-zero then x = (-b/a) y; likewise if b is non-zero you can solve for y. Therefore a set of two vectors is linearly dependent iff they are "collinear", i.e. one is a scalar multiple of the other.
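The first question above (a set containing the zero vector) can be checked directly: put any non-zero weight on the zero vector and zero weight on everything else, and you get a non-trivial combination summing to the zero vector, so the set is always dependent. A quick sketch with the vectors from the question:

```python
v1 = [0, 0, 0]   # the zero vector
v2 = [2, 1, 0]
v3 = [1, 4, 3]

# At least one weight (c1) is non-zero, yet the combination is still zero.
c1, c2, c3 = 4, 0, 0
combo = [c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3)]
print(combo)  # the zero vector from a non-trivial combination
```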