# Matrix vector products as linear transformations

Created by Sal Khan.

## Want to join the conversation?

• At 6 minutes or so you say there are only two things a linear transformation must satisfy: preserving scalar multiplication and preserving addition. What about the 0 vector mapping to the 0 vector in the new dimension? I thought that was one too. GREAT video though. Not trying to be a critic, just curious. Khan Academy is an awesome project. Keep it up!
• The 0 vector mapping to the 0 vector can be derived from the preservation of scalar multiplication and vector addition. Take T(0) = T(0 + 0) = T(0) + T(0), by the addition property. Then subtract T(0) from both sides and you get T(0) = 0, as desired.
• How exactly are matrices used in computer science or physics? I've heard they relate to graphics in computer science and to vector quantities in physics, but how do I actually apply matrices to these? Could someone give an example in either computer science or physics and explain exactly how we work with matrices? Thanks in advance!
• Also, some people (like myself) work much better with tangible objects than with all these laws, rules, and properties. If I could "see" linear transformations geometrically, graphed out and visualized, the theory would be much more digestible.
• Oh wow, we just drop the results of the sin/cos/tan functions into the rotation matrix? Seems simple enough.

What I am confused about is in how we decided to use these specific trig functions....
that is
[Cos(theta) , -Sin(theta)]
[Sin(theta) , Cos(theta)]

I understand vertical V1 is multiplied by X and vertical V2 is multiplied by Y, but still don't see how they were built.

Does the "arrangement" the trig functions are in ever change (when doing rotations)? I guess I don't see how you arrived at that matrix, so I'm taking you up on your offer :), that is, I'm confused about how you picked which trig functions to use in the matrix. I recognize the results of the trig functions fine (I'm more familiar with SOHCAHTOA, i.e. hyp·sin(theta) or hyp·cos(theta), not x·cos(theta) or -y·sin(theta)).

I see Wikipedia has a sheet on various R2 matrix calculations, but I'm still lost as to how those matrices were derived. I hope you're clearer than Wiki, as I mostly work in R3 and will need to calculate rotations about Z as well.

I think the key lies in figuring out how to do any kind of transformation, not just rotations. It appears that, for example in R2, a transformation has the form
[x transformation, y transformation]
[x transformation, y transformation]

Reading your response below, would an R3 rotation be described by a 4x4 matrix?
[x transformation, y transformation, z transformation, w transformation]
[x transformation, y transformation, z transformation, w transformation]
[x transformation, y transformation, z transformation, w transformation]
[x transformation, y transformation, z transformation, w transformation]
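One way to see where those trig entries come from: the columns of a transformation matrix are just the images of the basis vectors, and rotating e1 = (1, 0) by theta lands on (cos θ, sin θ) while e2 = (0, 1) lands on (-sin θ, cos θ). Here is a minimal sketch in plain Python (the function names `rotation_matrix` and `apply` are made up for illustration, not from the video):

```python
import math

def rotation_matrix(theta):
    """2x2 rotation matrix: its columns are the images of the
    basis vectors e1 = (1, 0) and e2 = (0, 1) after rotating by theta."""
    c, s = math.cos(theta), math.sin(theta)
    # e1 = (1, 0) rotates to ( cos t, sin t)  -> first column
    # e2 = (0, 1) rotates to (-sin t, cos t)  -> second column
    return [[c, -s],
            [s,  c]]

def apply(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

R = rotation_matrix(math.pi / 2)   # rotate by 90 degrees
print(apply(R, [1.0, 0.0]))        # e1 lands (up to rounding) on [0.0, 1.0]
```

So the "arrangement" of the trig functions never changes for a 2D rotation; only theta does. For R3 you get one such 3x3 matrix per axis, built the same way from the images of the three basis vectors.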
• I would really like to see a demonstration on using Linear Transformations to describe a rotation and a relocation in a 3d space.

Would I need a 3x3 matrix to do that? A 3x4? All this theory is fine and well, but some examples of specific applications such as the ones mentioned above would be great.
• If you had an object in 3D space, with a 3x3 matrix you can rotate, scale, stretch, flip, and project it. You cannot translate (relocate) it. You don't need a 4x4 to translate; you could do it with a 3x4, as you suggest. A 3x4 would be very inconvenient, though. As it isn't square, it wouldn't have an inverse. Quite often we want to do the opposite transform, and the inverse matrix is handy in that it undoes the transformation. Another thing we want to do is combine transformations into one transformation. In the matrix world, we do this by multiplying the transformation matrices together. 4x4 matrices can be combined easily; the product of two 3x4 matrices, on the other hand, isn't even defined.

N.B. When using a 4x4 matrix, the 3D points are typically augmented with an extra coordinate we call w. w is typically set to 1. This augmentation is required to allow the product of a 4x4 matrix and a 4x1 vector to be defined.

I don't have any examples to point you at right now, but if I find some I'll edit this answer.
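As a small illustration of the augmentation described above, here is a sketch in plain Python (helper names `translate` and `apply4` are mine, for illustration): a 3D point gets a fourth coordinate w = 1, and the translation amounts live in the last column of the 4x4 matrix.

```python
def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix: identity plus a last
    column holding the translation amounts."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def apply4(M, v):
    """4x4 matrix times a 4x1 vector."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

p = [2.0, 3.0, 4.0, 1.0]               # point (2, 3, 4) augmented with w = 1
print(apply4(translate(10, 0, 0), p))  # -> [12.0, 3.0, 4.0, 1.0]
```

Because w = 1, the tx entry gets added to x during the multiplication, which is exactly the translation a plain 3x3 matrix cannot express.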
• At , the matrix multiplication he performs does not make sense to me. It looks like at first he's treating v1, v2, v3... as the column vectors of matrix A, which would have dimension 1xm (causing it to have the expected mxn dimensions, as there are n vectors), but then he multiplies them by the x vector, which is an nx1 matrix. You cannot perform matrix multiplication between a 1xm and an nx1 matrix. Am I overlooking something?
• A has n column vectors, which are each m x 1. So you can't multiply them by x as a vector (as x is n x 1), but that is not what is happening. He is multiplying them by the elements of x, so x1, x2 up to xn, and then summing the result. Each element of x is just a scalar, which can obviously multiply the column vectors of A. This is just another way to go through the mechanics of multiplying: using the elements of x as coefficients of the columns of A, and it gives the same answer as doing it, say, the dot product way.
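The two mechanics the answer describes can be checked in a few lines of plain Python (helper names are mine, and the matrix is the 2x2 example [2, -1; 3, 4] used elsewhere in this thread):

```python
def matvec_dot(A, x):
    """Row-by-row (dot product) matrix-vector multiplication."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def matvec_columns(A, x):
    """The same product as a linear combination of A's columns:
    x1 * col1 + x2 * col2 + ... + xn * coln."""
    m, n = len(A), len(x)
    result = [0] * m
    for j in range(n):                    # for each column of A ...
        for i in range(m):
            result[i] += x[j] * A[i][j]   # ... add scalar x_j times column j
    return result

A = [[2, -1],
     [3,  4]]
x = [5, 6]
print(matvec_dot(A, x))      # -> [4, 39]
print(matvec_columns(A, x))  # -> [4, 39], the same answer
```

The scalar-times-column view is exactly what the video does: no 1xm-times-nx1 product ever occurs, only scalars multiplying m x 1 columns.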
• I have an extremely basic question ...
Is multiplying a matrix by a vector the same as multiplying a vector by a matrix (i.e., does the order matter)?
Sal says at the beginning of this video that "taking a product of a vector with a matrix is equivalent to a transformation" ... should that sentence be "taking a product of a matrix with a vector is equivalent to a transformation"?

Sorry about nit-picking on possibly trivial elements ... it's because one doesn't know whether something is important until one has fully surveyed the subject :)
• Does every matrix A have a matrix B (where A != B) such that Ax = Bx = y?
For example, in Sal's 2x2 matrix [2, -1 <below> 3, 4], the matrix vector product Bx was equal to [2x.1 - x.2 <below> 3x.1 + 4x.2], whereas a = 2, b = -1, c = 3, and d = 4. However, if we had a new matrix A whereas its a = -x.2/x.1, b = 2x.1/x.2, c = 4x.2/x.1, and d = 3x.1/x.2, then, for any x.1 and x.2, Ax = Bx = y.
Is this right, and if so, what does it mean when you deal with matrix inverses? If you have C as an inverse, and you do Cy = x, does there exist many possible C's where Cy = x instead of only one C?

Thanks. • Not quite. What you have shown is that two different matrices can transform a specific vector to the same image. By making your "new matrix A" (matrix B) dependent on the vector this holds only for the specific vector. (Also notice that your new matrix falls apart if x_1 or x_2 = 0. I think if your construction does not work for x = [1 0] and x = [0 1], then you're looking for trouble.)

You should try a specific example with x_1 and x_2 != 0. You'll get two distinct matrices A != B that transform your x_1 and x_2 to the same x_1' and x_2'. Yay! So far so good, but the two matrices A and B will not transform a different x_1 and x_2 to the same image.

Consider for example that both a rotation and a reflection can take a specific vector to the same image, but will not take all vectors (the entire space) to the same image.

Some reflections transform specific vectors to the same vector, but that does not mean that they are the identity transformation.
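The answer's point can be made concrete in plain Python. Below, B is a matrix I picked by hand (for illustration only) so that it agrees with Sal's matrix [2, -1; 3, 4] on one particular vector, but disagrees everywhere else:

```python
def matvec(A, x):
    """2x2 matrix times a 2-vector."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

A = [[2, -1],
     [3,  4]]
B = [[2, -1],
     [11, 0]]           # a different matrix, chosen by hand

x = [1, 2]
print(matvec(A, x))     # -> [0, 11]
print(matvec(B, x))     # -> [0, 11]  (same image for this particular x)

y = [1, 0]
print(matvec(A, y))     # -> [2, 3]
print(matvec(B, y))     # -> [2, 11]  (different images: A != B as transformations)
```

So two matrices can agree on one vector without being the same transformation, which is why the inverse of an invertible matrix is still unique.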
• Why are we checking whether things are linear transformations? Are there some perks to being linear?
• Linear transformations are the simplest kind, and they cover a very wide range of useful transformations of vectors. Non-linear transformations, on the other hand, don't behave nearly as well when you change your coordinate grid, which makes them much harder to work with. But the main reason is that a linear transformation can always be represented as a matrix-vector product, which allows some neat simplifications.
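Those two defining "perks" are easy to verify numerically. A minimal sketch in plain Python, using the map defined by the 2x2 matrix [2, -1; 3, 4] from this thread (helper names `T`, `add`, and `scale` are mine, for illustration):

```python
def T(v):
    """A linear map: multiplication by the matrix [[2, -1], [3, 4]]."""
    return [2 * v[0] - 1 * v[1],
            3 * v[0] + 4 * v[1]]

def add(u, v):
    return [u[0] + v[0], u[1] + v[1]]

def scale(c, v):
    return [c * v[0], c * v[1]]

u, v, c = [1.0, 2.0], [3.0, -1.0], 5.0
print(T(add(u, v)) == add(T(u), T(v)))   # True: T preserves addition
print(T(scale(c, u)) == scale(c, T(u)))  # True: T preserves scalar multiplication
print(T([0.0, 0.0]))                     # [0.0, 0.0]: 0 maps to 0, as derived in the first answer above
```

Any map given by a matrix-vector product passes both checks for all inputs, which is exactly why "linear" and "representable as a matrix" go hand in hand.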