# Matrix vector products as linear transformations

Created by Sal Khan.

## Want to join the conversation?

• How exactly are matrices used in computer science or physics? I mean, yeah, I've heard they're related to graphics in computer science and to vector quantities in physics, but how exactly do I apply matrices to these? Could someone please give an example, either in computer science or physics, and explain exactly how we work with matrices? Thanks in advance!
• Peter,

I can give you a more in depth physics example.

Are you familiar with salt-water taffy? It's a piece of candy that is usually cylindrically shaped, about an inch long and maybe a quarter of an inch in diameter. And it's pretty "squishy" and "stretchy," kind of like the consistency of play-doh.

Well, imagine you have a piece of taffy and you are holding it so that the long dimension is parallel to the ground. Now imagine that you pull both ends of it. What happens? Well, of course it gets longer in the direction that you are pulling it. But in the middle it also starts to get skinny. The technical name for this is "deformation" or "strain." Think of how many vectors it would take to describe the strain. It's getting longer in the direction you are pulling, so that would be one vector, but it's getting shorter in the other two directions. And of course, nothing says you have to pull perfectly along one axis. You could pull in some weird direction. So to describe a 3-dimensional strain it would actually take 9 values:

- Axx: 1 value describing how the x dimension changes due to a force in the x direction
- Axy: 1 value describing how the x dimension changes due to a force in the y direction
- Axz: 1 value describing how the x dimension changes due to a force in the z direction

And of course, there would be six more, for the other possibilities: Ayx, Ayy, Ayz, Azx, Azy, Azz.

If you put all of these together they are called the "Strain Tensor," and they can be arranged as a 3 by 3 matrix. Using that matrix, you can calculate how much force it takes to stretch the taffy from one inch long to two inches long, for example, and then how much the taffy will "neck down," or get skinny, in the middle.

Unfortunately, I can't give you more depth than that in a comment, but a quick Google search will find you a lot more! Of course, this is only one of a bunch of ways that matrices are used in applications.
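The strain-tensor idea above can be sketched as a plain matrix-vector product. The numbers below are made up for illustration (not real material constants), and the helper name `mat_vec` is invented here:

```python
# Hypothetical 3x3 "strain-like" matrix: made-up numbers, not real
# material constants. Row i tells how dimension i responds to forces
# along x, y, and z (the Axx, Axy, ... values above).
A = [
    [1.5, 0.1, 0.1],   # Axx, Axy, Axz
    [-0.2, 0.8, 0.0],  # Ayx, Ayy, Ayz
    [-0.2, 0.0, 0.8],  # Azx, Azy, Azz
]

def mat_vec(A, x):
    """Matrix-vector product: (Ax)_i = sum_j A[i][j] * x[j]."""
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

# A pull purely along x stretches x and shrinks ("necks down") y and z:
print(mat_vec(A, [1.0, 0.0, 0.0]))  # [1.5, -0.2, -0.2]
```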
• Also, some people (like myself) work much better with tangible objects than all these laws, rules, and properties. If I could "see" linear transformations geometrically, graphed out and visualized, the theory would be much more digestible.
• Oh wow, we just drop the results of the sin/cos/tan functions into the rotation matrix? Seems simple enough.

What I am confused about is how we decided to use these specific trig functions...
that is
[Cos(theta) , -Sin(theta)]
[Sin(theta) , Cos(theta)]

I understand vertical V1 is multiplied by X and vertical V2 is multiplied by Y, but still don't see how they were built.

Does the "arrangement" the trig functions are in ever change (when doing rotations)? I guess I don't see how you arrived at that matrix, so I'm taking you up on your offer :). That is, I'm confused about how you picked which trig functions to use in the matrix. I recognize the results of the trig functions fine (I'm more familiar with SOHCAHTOA, i.e. hyp·sin(theta) or hyp·cos(theta), not x·cos(theta) or -y·sin(theta)).

I see Wikipedia has a sheet on various R2 matrix calculations, but I'm still lost as to how those matrices were derived. I hope you're clearer than Wiki, as I mostly work in R3 and will need to calculate rotations about z as well.

I think the key lies in figuring out how to do any kind of transformation, not just rotations. It appears that, for example in R2, the matrix takes the form
[x transformation, y transformation]
[x transformation, y transformation]

Reading your response below, an R3 rotation would be described by a 4x4 matrix?
[x transformation, y transformation, z transformation, w transformation]
[x transformation, y transformation, z transformation, w transformation]
[x transformation, y transformation, z transformation, w transformation]
[x transformation, y transformation, z transformation, w transformation]
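One way to see where the trig entries in the rotation matrix quoted above come from: the columns of a transformation matrix are the images of the basis vectors. A small sketch (the function name `rotate` is invented here; plain Python, no libraries assumed):

```python
import math

def rotate(theta, v):
    """Apply the rotation matrix [[cos, -sin], [sin, cos]] to a 2-vector."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1]]

theta = math.pi / 2  # rotate by 90 degrees
# The columns of the matrix are just where the basis vectors land:
e1_image = rotate(theta, [1, 0])  # first column  [cos, sin]  -> ~[0, 1]
e2_image = rotate(theta, [0, 1])  # second column [-sin, cos] -> ~[-1, 0]
```

So "which trig function goes where" is determined by asking where the rotation sends [1, 0] and [0, 1], and writing those images down as columns.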
• I would really like to see a demonstration on using Linear Transformations to describe a rotation and a relocation in a 3d space.

Would I need a 3x3 matrix to do that? A 3x4? All this theory is fine and well, but some examples of specific applications such as the ones mentioned above would be great.
• If you had an object in 3D space, with a 3x3 matrix you can rotate, scale, stretch, flip, or project it. You cannot translate it (relocate it). You don't need a 4x4 to translate; you could do that with a 3x4, as you suggest. A 3x4 would be very inconvenient, though: as it isn't square, it wouldn't have an inverse. Quite often we want to do the opposite transform, and the inverse matrix is handy in that it undoes the transformation. Another thing we want to do is combine transformations into one transformation. In the matrix world, we do this by multiplying the transformation matrices together. Keeping everything 4x4 means they can be combined easily. The product of two 3x4 matrices, on the other hand, isn't even defined.

N.B. When using a 4x4 matrix, the 3D points are typically augmented with an extra coordinate we call w. w is typically set to 1. This augmentation is required to allow the product of a 4x4 matrix and a 4x1 vector to be defined.

I don't have any examples to point you at right now, but if I find some I'll edit this answer.
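A minimal sketch of the w = 1 augmentation described above, with a hypothetical 4x4 translation matrix (the helper names `translation` and `apply` are made up for illustration):

```python
# A translation by (tx, ty, tz) written as a 4x4 matrix; the 3D point
# is augmented with w = 1 so the matrix-vector product is defined,
# as described in the answer above.
def translation(tx, ty, tz):
    return [
        [1, 0, 0, tx],
        [0, 1, 0, ty],
        [0, 0, 1, tz],
        [0, 0, 0, 1],
    ]

def apply(M, p):
    """Multiply a 4x4 matrix by a 4x1 (augmented) vector."""
    return [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]

p = [1.0, 2.0, 3.0, 1.0]                # the 3D point (1, 2, 3), with w = 1
print(apply(translation(10, 0, 0), p))  # [11.0, 2.0, 3.0, 1.0]
```

Because the result is again 4x4-compatible, translations can be chained with rotations and scalings by plain matrix multiplication, which is the convenience the answer points out.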
• At , the matrix multiplication he performs does not make sense to me. It looks like at first he's treating v1, v2, v3... as the column vectors of matrix A, which would have dimension 1xm (causing it to have the expected mxn dimensions, as there are n vectors), but then he multiplies them by the x vector, which is an nx1 matrix. You cannot perform matrix multiplication between a 1xm and an nx1 matrix. Am I overlooking something?
• A has n vectors, which are each m x 1. So you can't multiply them by x as a vector (as x is n x 1), but that is not what is happening. He is multiplying them by the elements of x, so x1, x2 to xn, and then summing the result. Each element of x is just a scalar, which obviously can multiply the vector columns of A. This is just another way to go through the mechanics of multiplying: using the elements of x as coefficients of the vectors of A, and it gives the same answer as doing it, say, the dot-product way.
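The two mechanics described above (scalars times columns vs. rows dotted with x) can be checked side by side; the function names below are made up for illustration:

```python
def columns_way(cols, x):
    """Ax as x1*v1 + x2*v2 + ..., where the v_i are the columns of A."""
    m = len(cols[0])
    result = [0] * m
    for xi, col in zip(x, cols):       # each x_i is a scalar coefficient
        for i in range(m):
            result[i] += xi * col[i]
    return result

def rows_way(A, x):
    """The usual definition: each entry is a row of A dotted with x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

cols = [[2, 3], [-1, 4]]   # columns v1, v2 of Sal's 2x2 example
A = [[2, -1], [3, 4]]      # the same matrix, stored by rows
x = [5, 6]
assert columns_way(cols, x) == rows_way(A, x)  # both give [4, 39]
```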
• Does every matrix A have a matrix B (where A != B) such that Ax = Bx = y?
For example, in Sal's 2x2 matrix [2, -1 <below> 3, 4], the matrix vector product Bx was equal to [2x.1 - x.2 <below> 3x.1 + 4x.2], where a = 2, b = -1, c = 3, and d = 4. However, if we had a new matrix A where a = -x.2/x.1, b = 2x.1/x.2, c = 4x.2/x.1, and d = 3x.1/x.2, then, for any x.1 and x.2, Ax = Bx = y.
Is this right, and if so, what does it mean when you deal with matrix inverses? If you have C as an inverse and you do Cy = x, do there exist many possible C's where Cy = x instead of only one C?

Thanks.
• Not quite. What you have shown is that two different matrices can transform a specific vector to the same image. By making your "new matrix A" (matrix B) dependent on the vector, this holds only for that specific vector. (Also notice that your new matrix falls apart if x_1 or x_2 = 0. I think if your construction does not work for x = [1, 0] and x = [0, 1], then you're looking for trouble.)

You should try a specific example with x_1 and x_2 != 0. You'll get two distinct matrices A != B that will transform your x_1 and x_2 to the same x_1' and x_2'. Yay! So far so good, but the two matrices A and B will not transform a different x_1 and x_2 to the same image.

Consider for example that both a rotation and a reflection can take a specific vector to the same image, but will not take all vectors (the entire space) to the same image.

Some reflections transform specific vectors to the same vector, but that does not mean that they are the identity transformation.
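The construction in the question can be tried numerically for one specific vector, x = [1, 2]; matrix B below is computed from the question's formulas, and the helper name `mat_vec` is invented here:

```python
def mat_vec(A, x):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[2, -1], [3, 4]]   # Sal's matrix
x = [1, 2]              # a specific vector with x_1, x_2 != 0
# B from the question's formulas: a = -x2/x1, b = 2*x1/x2,
# c = 4*x2/x1, d = 3*x1/x2
B = [[-2.0, 1.0], [8.0, 1.5]]

print(mat_vec(A, x), mat_vec(B, x))            # same image: [0, 11]
print(mat_vec(A, [1, 0]), mat_vec(B, [1, 0]))  # differ: [2, 3] vs [-2.0, 8.0]
```

This matches the answer above: A and B agree on the one vector B was built from, but disagree everywhere else, so B is not a second "version" of the same transformation.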
• I have an extremely basic question ...
Is multiplying a matrix by a vector the same as multiplying a vector by a matrix (i.e., does the order matter)?
Sal says in the beginning of this video that "taking a product of a vector with a matrix is equivalent to a transformation" ... should that sentence be "taking a product of a matrix with a vector is equivalent to a transformation."

Sorry about nit-picking on possibly trivial elements ... it's because one does not know if something is important or not until one has fully surveyed the subject :)
• On , why did Sal insist on writing a bold A?
I thought only vectors were bolded?
• Why are we checking whether things are linear transformations? Are there some perks to being linear?