- A more formal understanding of functions
- Vector transformations
- Linear transformations
- Visualizing linear transformations
- Matrix from visual representation of transformation
- Matrix vector products as linear transformations
- Linear transformations as matrix vector products
- Image of a subset under a transformation
- im(T): Image of a transformation
- Preimage of a set
- Preimage and kernel example
- Sums and scalar multiples of linear transformations
- More on matrix addition and scalar multiplication
Introduction to linear transformations. Created by Sal Khan.
- When my teacher says to learn transformations: reflections, rotations, translations, dilations... is this the video I should be watching for that, or is "linear transformations" something different? If it is, could you tell me what that video is called so I can look it up? Thank you so much, a bit confused here.(18 votes)
- These linear transformations are probably different from what your teacher is referring to; while the transformations presented in this video are functions that associate vectors with vectors, your teacher's transformations likely refer to actual manipulations of functions.
Unfortunately, Khan doesn't seem to have any videos for transformations, reflections, etc. in his algebra playlist, but the links below might be useful.
Paul's Online Notes: http://tutorial.math.lamar.edu/Classes/Alg/Transformations.aspx
Hope this helps and good luck!
- Is another name for this 'linear mappings'?(13 votes)
- Simple question (apologies if it's been answered, I'm about halfway through), but what exactly does "linear" mean? I understand that it means meeting those criteria, but in a very abstract sense (and hopefully in layman's terms), what does it mean? Perhaps it implies continuity? Perhaps it means the transformation won't enter the domain of complex numbers?
Also, can you name a condition or two where the linearity criteria will consistently be broken?
I hope I'm clear on the type of answer I'm looking for. Thanks,
EDIT: With a little inductive reasoning, it appears that if a transformation is NOT linear, something is being lost or gained either when the vectors are added together and then transformed, or when they are transformed and then added together.
I guess that something would be lost in transformation, not addition, so if information is lost in transformation then it would still be lost when the results are added together; thus giving a different result.
I guess I answered my own question =D
You mentioned squares and exponents. Curiously, something inherent in either transforming or adding squares or exponents is causing a loss of information.
Care to take this logic further?(12 votes)
- The textbook definition of linear is: "progressing from one stage to another in a single series of steps; sequential." Which makes sense because if we are transforming these matrices linearly they would follow a sequence based on how they are scaled up or down.(6 votes)
- Why do we need to have two conditions here?
Isn't the vector addition enough? After all, if you can add vector a and a scalar times vector a, then this is the same thing as just multiplying the vector by that scalar + 1, isn't it?(6 votes)
- But how would we get a scalar like 1.1 from just adding a vector with itself, or pi for that matter?
This is a great question, and one I used to ask myself. Ultimately, there are examples of transformations that satisfy vector addition but not scalar multiplication, so both conditions for linearity are in fact necessary.(5 votes)
- It would be good if there were more practice problems and quizzes on this unit. It is hard to keep track of all this information without applying it.(7 votes)
- In order for it to be a linear transformation, doesn't the zero vector have to satisfy the conditions as well? If so, how come it wasn't in the video?(3 votes)
- Let v be an arbitrary vector in the domain. Then T( 0 ) = T( 0 * v ) = 0 * T( v ) = 0. So you don't need to make that a part of the definition of linear transformations, since it already follows from the two conditions.(3 votes)
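The derivation above can be checked numerically. This is a minimal sketch, using the transformation from the video, T(x1, x2) = (x1 + x2, 3*x1), defined here just for illustration:

```python
# Since T(0) = T(0 * v) = 0 * T(v) = 0, a linear transformation must send
# the zero vector to the zero vector. Check this on the video's example.
def T(x):
    x1, x2 = x
    return (x1 + x2, 3 * x1)

print(T((0, 0)))  # (0, 0)
```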
- At 4:23, Sal said that a component of a vector is a scalar, but the components of a vector also have direction (like the component along the x axis), right? So in that way a component of a vector should also be a vector, I think! Well, I'm confused, please help. And sorry for the silly out-of-context question. :)(2 votes)
- Well, strictly speaking, a component of a vector -- just what is written inside a vector cell -- is a scalar; it carries no information about which cell it was written in. What you are talking about is vector decomposition, i.e. representing a vector as a sum of axis-aligned vectors. Consider this example: given the vector
v = (3, 4, 5)
it has scalar components - just numbers 3, 4, and 5
it could also be decomposed into a sum, like this
vx = (3, 0, 0)
vy = (0, 4, 0)
vz = (0, 0, 5)
v = vx + vy + vz
look up 'vector basis'(3 votes)
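The decomposition described above can be sketched in a few lines of Python. This is only an illustration: the scalar components of v are plain numbers, while the axis-aligned pieces are themselves vectors that sum back to v.

```python
v = [3, 4, 5]

# Decompose v into one axis-aligned vector per component
# (all other entries zeroed out).
parts = []
for i, component in enumerate(v):
    part = [0, 0, 0]
    part[i] = component
    parts.append(part)

# Summing the parts component-wise recovers v.
recovered = [sum(p[i] for p in parts) for i in range(3)]
print(parts)      # [[3, 0, 0], [0, 4, 0], [0, 0, 5]]
print(recovered)  # [3, 4, 5]
```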
- Is there a third property of a transformation being linear: T(0) = 0? I can't think of when this wouldn't be the case, unless there's a constant in the transformation without a variable.. Wanted to confirm if this is a property or not... Thanks.(3 votes)
- Sal, can we find a linear transformation by knowing the basis of its kernel?(2 votes)
- No. Knowing the kernel tells us which basis vectors are sent to 0, but the remaining basis vectors could still be sent anywhere.(3 votes)
- At 13:25, Sal mentions that if you're dealing with a transformation that involves only a linear combination of different components of the inputs, you're "probably" dealing with a linear transformation. But if we're talking about a "linear combination" of components, wouldn't it ALWAYS be a linear transformation? If not, can someone give an example where a linear combination of components leads to a non-linear transformation?(1 vote)
- Unfortunately LaTeX does not work in these comment boxes, otherwise I could have shown you a proof that any transformation consisting only of linear combinations of the input's components is a linear transformation. Roughly: any combination of vectors can be expressed as another vector, and any combination of constants collapses into one bigger constant. So by proving that T(a) = [c1*a1, c2*a2, ..., cn*an] is a linear transformation for any vector a and any constants c1, c2, ..., cn, you can prove that any transformation built only from linear combinations is linear.
To upgrade "if T consists of linear combinations of the components" to an "if and only if", you would also need to show that any transformation built from a non-linear combination of the components fails at least one of the two conditions.
(Note that a linear combination involves only the components and constant coefficients; products of components, squares, and the like are out of the question.)
I hope that shows the proof is quite simple, so it is certainly possible, even easy, to state that all such examples work, since one short proof covers every possible linear combination without loss of generality.(3 votes)
You now know what a transformation is, so let's introduce a special kind of transformation called a linear transformation. It only makes sense that we have something called a linear transformation because we're studying linear algebra. We already had linear combinations so we might as well have a linear transformation. And a linear transformation, by definition, is a transformation-- which we know is just a function. We could say it's from the set rn to rm -- It might be obvious in the next video why I'm being a little bit particular about that, although they are just arbitrary letters -- where the following two things have to be true. So something is a linear transformation if and only if the following thing is true. Let's say that we have two vectors. Say vector a and let's say vector b, are both members of rn. So they're both in our domain. So then this is a linear transformation if and only if I take the transformation of the sum of our two vectors. If I add them up first, that's equivalent to taking the transformation of each of the vectors and then summing them. That's my first condition for this to be a linear transformation. And the second one is, if I take the transformation of any scaled up version of a vector -- so let me just multiply vector a times some scalar or some real number c. If this is a linear transformation then this should be equal to c times the transformation of a. That seems pretty straightforward. Let's see if we can apply these rules to figure out if some actual transformations are linear or not. So let me define a transformation. Let's say that I have the transformation T. Part of my definition I'm going to tell you, it maps from r2 to r2. So you give it a 2-tuple, right? Its domain is 2-tuples. So you give it an x1 and an x2, and let's say it maps to, so this will be equal to, or it's associated with, x1 plus x2. And then let's just say 3 times x1 is the second component. Or we could have written this more in vector form.
This is kind of our tuple form. We could have written it -- and it's good to see all the different notations that you might encounter -- you could write it as a transformation of some vector x, where the vector looks like this, x1, x2. Let me put a bracket there. It equals some new vector, x1 plus x2. And then the second component of the new vector would be 3x1. That's a completely legitimate way to express our transformation. And a third way, which I never see, but to me it kind of captures the essence of what a transformation is. It's just a mapping or it's just a function. We could say that the transformation is a mapping from any vector in r2 that looks like this: x1, x2, to-- and I'll do this notation-- a vector that looks like this. x1 plus x2 and then 3x1. All of these statements are equivalent. But our whole point of writing this is to figure out whether T is linearly independent. Sorry, not linearly independent. Whether it's a linear transformation. I was so obsessed with linear independence for so many videos, it's hard to get it out of my brain in this one. Whether it's a linear transformation. So let's test our two conditions. I have them up here. So let's take T of, let's say I have two vectors a and b. They're members of r2. So let me write it. A is equal to a1, a2, and b is equal to b1, b2. Sorry that's not a vector. I have to make sure that those are scalars. These are the components of a vector. And b2. So what is a1 plus b? Sorry, what is vector a plus vector b? Brain's malfunctioning. All right. Well, you just add up their components. This is the definition of vector addition. So it's a1 plus b1. Add up the first components. And the second component is just the sum of each vector's second component. a2 plus b2. Nothing new here. But what is the transformation of this vector? So the transformation of vector a plus vector b, we could write it like this.
That would be the same thing as the transformation of this vector, which is just a1 plus b1 and a2 plus b2. Which we know it equals a vector. It equals this vector. Or what we do is for the first component here, we add up the two components on this side. So the first component here is going to be these two guys added up. So it's a1 plus a2 plus b1 plus b2. And then the second component by our transformation or function definition is just 3 times the first component in our domain, I guess you could say. So it's 3 times the first one. So it's going to be 3 times this first guy. So it's 3a1 plus 3b1. Fair enough. Now what is the transformation individually of a and b? So the transformation of a is equal to the transformation of a -- let me write it this way -- is equal to the transformation of a1 a2 in brackets. That's another way of writing vector a. And what is that equal to? That's our definition of our transformation right up here, so this is going to be equal to the vector a1 plus a2 and then 3 times a1. It just comes straight out of the definition. I essentially just replaced an x with a's. By the same argument, what is the transformation of our vector b? Well, it's just going to be the same thing with the a's replaced by the b's. So the transformation of our vector b is going to be -- b is just b1 b2 -- so it's going to be b1 plus b2. And then the second component in the transformation will be 3 times b1. Now, what is the transformation of vector a plus the transformation of vector b? Well, it's this vector plus that vector. And what is that equal to? Well, this is just pure vector addition so we just add up their components. So it's a1 plus a2 plus b1 plus b2. That's just that component plus that component. The second component is 3a1 and we're going to add it to that second component. So it's 3a1 plus 3b1. 
Now, we just showed you that if I take the transformations separately of each of the vectors and then add them up, I get the exact same thing as if I took the vectors and added them up first and then took the transformation. So we've met our first criteria. That the transformation of the sum of the vectors is the same thing as the sum of the transformations. Now let's see if this works with a random scalar. So we know what the transformation of a looks like. What does ca look like, first of all? I guess that's a good place to start. c times our vector a is going to be equal to c times a1. And then c times a2. That's our definition of scalar multiplication times a vector. So what's our transformation -- let me go to a new color. What is our -- let me do a color I haven't used in a long time, white. What is our transformation of ca going to be? Well, that's the same thing as our transformation of ca1, ca2 which is equal to a new vector, where the first term -- let's go to our definition -- is you sum the first and second components. And then the second term is 3 times the first component. So our first term you sum them. So it's going to be ca1 plus ca2. And then our second term is 3 times our first term, so it's 3ca1. Now, what is this equal to? This is the same thing. We can view it as factoring out the c. This is the same thing as c times the vector a1 plus a2. And then the second component is 3a1. But this thing right here, we already saw. This is the same thing as the transformation of a. So just like that, you see that the transformation of c times our vector a, for any vector a in r2 -- anything in r2 can be represented this way -- is the same thing as c times the transformation of a. So we've met our second condition -- well, I just stated it, so I don't have to restate it. So we meet both conditions, which tells us that this is a linear transformation. And you might be thinking, OK, Sal, fair enough.
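The two checks just walked through can be replayed numerically. This is a minimal sketch, not a proof: the helper names add and scale are just illustrative labels for vector addition and scalar multiplication, applied to the T defined above on sample vectors.

```python
# T(x1, x2) = (x1 + x2, 3*x1), the transformation from the video.
def T(x):
    x1, x2 = x
    return (x1 + x2, 3 * x1)

def add(u, v):
    """Component-wise vector addition."""
    return (u[0] + v[0], u[1] + v[1])

def scale(c, u):
    """Scalar multiplication of a vector."""
    return (c * u[0], c * u[1])

a, b, c = (1, 2), (4, -3), 2.5

# Condition 1: T(a + b) == T(a) + T(b)
print(T(add(a, b)) == add(T(a), T(b)))  # True

# Condition 2: T(c * a) == c * T(a)
print(T(scale(c, a)) == scale(c, T(a)))  # True
```

Checking a handful of sample vectors is only evidence, of course; the transcript's component-by-component argument is what shows the conditions hold for every a, b, and c.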
How do I know that all transformations aren't linear transformations? Show me something that won't work. And here I'll do a very simple example. Let me define my transformation. Well, I'll do it from r2 to r2 just to kind of compare the two. I could have done it from r to r if I wanted a simpler example. But I'm going to define my transformation. Let's say, my transformation of the vector x1, x2. Let's say it is equal to x1 squared and then 0, just like that. Let me see if this is a linear transformation. So the first question is, what's my transformation of a vector a? So my transformation of a vector a-- where a is just the same a that I did before-- it would look like this. It would look like a1 squared and then a 0. Now, what would be my transformation if I took c times a? Well, this is the same thing as c times a1 and c times a2. And by our transformation definition -- sorry, the transformation of c times this thing right here, because I'm taking the transformation on both sides. And by our transformation definition this will just be equal to a new vector that would be in our codomain, where the first term is just the first term of our input squared. So it's ca1 squared. And the second term is 0. What is this equal to? Let me switch colors. This is equal to c squared a1 squared and this is equal to 0. Now, if we can assume that c does not equal 0, this would be equal to what? Actually, it doesn't even matter. We don't even have to make that assumption. So this is the same thing. This is equal to c squared times the vector a1 squared 0. Which is equal to what? This expression right here is a transformation of a. So this is equal to c squared times the transformation of a. Let me do it in the same color. So what I've just showed you is, if I take the transformation of a vector being multiplied by a scalar quantity first, that that's equal to -- for this T, for this transformation that I've defined right here -- c squared times the transformation of a.
And clearly this statement right here, or this choice of transformation, conflicts with this requirement for a linear transformation. If I have a c here I should see a c here. But in our case, I have a c here and I have a c squared here. So clearly this negates that statement. So this is not a linear transformation. And just to get a gut feel if you're just looking at something, whether it's going to be a linear transformation or not, if the transformation just involves linear combinations of the different components of the inputs, you're probably dealing with a linear transformation. If you start seeing things where the components start getting multiplied by each other or you start seeing squares or exponents, you're probably not dealing with a linear transformation. And then there's some functions that might be in a bit of a grey area, but it tends to be just linear combinations are going to lead to a linear transformation. But hopefully that gives you a good sense of things. And this leads up to what I think is one of the neatest outcomes, in the next video.
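For contrast, the same kind of numerical check applied to the squaring transformation from this last example shows the scaling condition failing. This is a minimal sketch; the name S is just a label chosen here to distinguish it from the linear T earlier.

```python
# S(x1, x2) = (x1**2, 0): scaling the input by c pulls out c**2,
# so c * S(a) and S(c * a) disagree and homogeneity fails.
def S(x):
    x1, x2 = x
    return (x1 ** 2, 0)

def scale(c, u):
    """Scalar multiplication of a vector."""
    return (c * u[0], c * u[1])

a, c = (3, 1), 2

print(S(scale(c, a)))  # (36, 0)  -- that's c**2 * a1**2
print(scale(c, S(a)))  # (18, 0)  -- only c * a1**2
print(S(scale(c, a)) == scale(c, S(a)))  # False
```

A single counterexample like this is enough to rule out linearity, which is why the transcript only needed to exhibit one failing scalar.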