
# Linear transformations as matrix vector products

Showing how ANY linear transformation can be represented as a matrix vector product. Created by Sal Khan.

## Want to join the conversation?

• I believe I have watched all the videos up to this point, and this is the first one that has confused me. It seems to jump straight into transforming a matrix, whilst the overall subject has been transforming vectors. Whilst I understand that a matrix can be considered a collection of column (or row) vectors, that doesn't explain the apparent jump to matrix transformation in the usual thorough way. So whilst we started off transforming a vector, we appear to have transformed a collection of vectors and used the result to transform the vector! What does the transformed matrix of basis vectors (the transformed identity matrix) represent?
• It is not actually the matrix that you transform, nor the column vectors of the matrix; it is the vector that you transform, by multiplying it by the matrix.
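A small numeric sketch may help here (the matrix entries are made up for illustration): the matrix stays fixed, and multiplying it by a vector both transforms that vector and equals a linear combination of the matrix's columns weighted by the vector's entries.

```python
import numpy as np

# A hypothetical transformation matrix; its entries are just for illustration.
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])      # maps R^2 -> R^3

x = np.array([10, 20])      # the vector being transformed

# Multiplying by A transforms x; the matrix itself never changes.
Tx = A @ x

# Equivalently, T(x) is a combination of A's columns weighted by x's entries.
Tx_from_columns = x[0] * A[:, 0] + x[1] * A[:, 1]

print(Tx)                                   # [ 50 110 170]
print(np.array_equal(Tx, Tx_from_columns))  # True
```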
• If any matrix-vector multiplication is a linear transformation, then how can I interpret the general linear regression equation `y = X β`?
X is the design matrix, β is a vector of the model's coefficients (one for each variable), and y is the vector of predicted outputs for each object.
Let's say X is a 100x2 matrix and β is 2x1. Then y is a 100x1 vector.

The concept is clear, but what does it mean from a linear transformation point of view?

I take a vector of coefficients in R^2 and, through X, transform it into a
vector in R^100. I can't visualize it logically... 🤔
• A 100x2 matrix is a transformation from 2-dimensional space to 100-dimensional space. So the image/range of the function will be a plane (2D space) embedded in 100-dimensional space. So each vector in the original plane will now also be embedded in 100-dimensional space, and hence be expressed as a 100-dimensional vector.
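The point above can be checked numerically (with a made-up random design matrix, just as an assumption for illustration): the output lives in R^100, but the rank of the 100x2 matrix is 2, so every possible output lies in a 2D plane embedded in that 100-dimensional space.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 100x2 design matrix X: it maps coefficient vectors in R^2
# to prediction vectors in R^100.
X = rng.standard_normal((100, 2))
beta = np.array([1.5, -0.5])     # coefficients in R^2

y = X @ beta                     # a single point in R^100
print(y.shape)                   # (100,)

# The image of the map is only 2-dimensional: every possible y lies in the
# plane spanned by X's two columns, as the matrix rank confirms.
print(np.linalg.matrix_rank(X))  # 2
```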
• I'm confused as to what is being taught here. Is the lesson saying that the transformation of a vector is equivalent to transforming the original basis and then using the result to transform the vector?
• Yes — the basis for R^2 is transformed, and the results are assembled into a 2x3 matrix using the equations above.
• So, can I just arrange the linear 'instructions' in ascending order of the components of vector x, take the coefficients of each term, and plug them into the matrix that gets multiplied by the x vector? That seems like a pretty legitimate shortcut now that I have an intuitive understanding of it.
• I agree. The use of the identity matrix is unnecessary. The coefficients of the matrix are directly taken from the transformation specification.
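To illustrate the shortcut being described (with a made-up transformation, since the video's exact example isn't reproduced here): if the rules are, say, T(x1, x2) = (x1 + 3·x2, 2·x1 − x2), the coefficients of x1 and x2 in each output component can be read off directly as the rows of the matrix.

```python
import numpy as np

# Hypothetical transformation: T(x1, x2) = (x1 + 3*x2, 2*x1 - x2).
# Each row of A is just the coefficients of (x1, x2) in one output
# component -- no need to transform the identity matrix first.
A = np.array([[1,  3],
              [2, -1]])

x = np.array([5, 2])
print(A @ x)   # [11  8], i.e. (5 + 3*2, 2*5 - 2)
```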
• What do you call that matrix? Is it the standard matrix?
• That two-column one? There's no proper name really, but it represents a transformation matrix, since you could multiply a vector by it to perform the transformation shown in the video. Let me know if that didn't answer your question.
• In the statement from the previous discussion, I think "the sum equals the sum of their transformations": x1·T(e1) + x2·T(e2) + ... + xn·T(en)
should be written this way: e1·T(x1) + e2·T(x2) + ... + en·T(xn). Can you explain why you put it that way and not the way I thought it ought to be written? Thanks
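A quick numeric check (with a made-up transformation matrix) shows why the sum is written x1·T(e1) + x2·T(e2): the x_i are scalar coordinates that weight the transformed basis vectors, whereas T applies to vectors like e1 and e2, not to the scalars x_i.

```python
import numpy as np

A = np.array([[2, 0],
              [1, 3]])       # a made-up linear transformation T
e1 = np.array([1, 0])
e2 = np.array([0, 1])
x = np.array([4, 5])         # so x = 4*e1 + 5*e2

# x1 and x2 are scalars weighting the transformed basis vectors:
lhs = A @ x
rhs = x[0] * (A @ e1) + x[1] * (A @ e2)
print(lhs, rhs)              # [ 8 19] [ 8 19]
```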
• This series has been helping me a lot, and I am thankful for it, but this one confused me to the point that I got a little desperate.