# Showing that inverses are linear

Showing that inverse transformations are also linear. Created by Sal Khan.

## Want to join the conversation?

• At one point: "The best way to get rid of the T is to multiply by T^(-1) on both sides of the equality."

• No! Use the fact that T is invertible and therefore bijective: if T(x) = T(y), then x = y.

• Sal hasn't used the word "bijective" yet in this series of videos, so he can't rely on that property to solve his problem. Therefore, based on the body of knowledge we've built so far, taking the inverse transformation of both sides might actually "be the best way to get rid of T." There may be simpler ways that Sal hasn't yet covered, such as the one you mention, but since they haven't been covered, they aren't in our toolkit yet.
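Both replies rely on the same cancellation step; it can be written out explicitly as a minimal sketch, using only the inverse property T^(-1)(T(x)) = x:

```latex
% Applying T^{-1} to both sides of T(\vec{x}) = T(\vec{y}):
T^{-1}\!\left(T(\vec{x})\right) = T^{-1}\!\left(T(\vec{y})\right)
\;\Longrightarrow\; \vec{x} = \vec{y}
```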
• If the first condition for linearity is satisfied, won't the second condition be satisfied too? If we know that the transformation of two added vectors is the same as their transformations added, then multiplying one vector by c could be seen as adding another vector to it, right?

• So what's the point of finding the inverse transformation? Is it simply to find the perpendicular line to our equation, or rather the equation of the original equation flipped over the line y = x?

• At one point Sal says that we know T is a linear transformation (and it has to be linear to be representable as a matrix), and the whole video is about finding out whether T-inverse is also linear.

My question is: are there any non-linear transformations that are invertible? I have the sense that the transformation from x to y such that y = x^3 should be invertible, but does anyone know for sure? Or does invertibility imply linearity?

• At one point Sal says, "I just change the associativity of this." I think we cannot use associativity before we prove that T-inverse is a linear transformation. Looking forward to answers. Many thanks!

• That really wasn't associativity per se; it's more like the definition of the composition operation. But it is true that composition of mappings is associative whether or not the mappings have any special properties. It's really beyond the scope of linear algebra, but if you know any abstract algebra, it follows from the fact that the set of all mappings from a set to itself is a monoid under composition. Hope this helps!
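The cube-root question above can be checked numerically. Here is a small Python sketch (not from the video) showing that f(x) = x^3 is invertible on the reals but fails both linearity conditions:

```python
# Sketch: f(x) = x^3 is invertible on the reals (its inverse is the
# real cube root), but it is NOT linear.

def f(x):
    return x ** 3

def f_inv(y):
    # Real cube root, handling negative inputs explicitly, since
    # float exponentiation of a negative base would fail.
    return y ** (1 / 3) if y >= 0 else -((-y) ** (1 / 3))

# Invertibility: f_inv undoes f on sample points.
for x in [-2.0, 0.0, 1.5]:
    assert abs(f_inv(f(x)) - x) < 1e-9

# Not additive: f(a + b) != f(a) + f(b) in general.
a, b = 1.0, 2.0
print(f(a + b), f(a) + f(b))  # 27.0 vs 9.0

# Not homogeneous: f(c * a) != c * f(a) in general.
c = 2.0
print(f(c * a), c * f(a))  # 8.0 vs 2.0
```

So invertibility does not imply linearity; the video's result only says that *if* T is both linear and invertible, then T^(-1) is linear as well.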
• To add on to Fares' comment: don't we only need the fact that T is injective to show that T(a) = T(b) implies a = b?

• Does associativity only apply to linear transformations?

• I'm not sure if this is a valid question, but at one point he expresses the sum of the vectors a and b as the sum of the identity transformation applied to each individual vector. But we don't know for sure that a and b are members of the domain of T^(-1), right? We assumed at the start that a + b is a member of the domain of T^(-1), so we used the identity transformation on it. How did he do the same for each of a and b?
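One way to resolve the question above (a sketch, not a reply from the thread): T^(-1) is defined on the whole codomain Y, so if a and b are in Y, each is individually in the domain of T^(-1), and the identity T(T^(-1)(v)) = v applies to each one:

```latex
% With T : X \to Y invertible and \vec{a}, \vec{b} \in Y:
\vec{a} + \vec{b}
  = T\!\left(T^{-1}(\vec{a})\right) + T\!\left(T^{-1}(\vec{b})\right)
```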
• At 17:50, Sal calls the matrix [in the matrix-vector product representing the linear transformation T^(-1)] "A^(-1)", but only later relates it to A as its inverse. How did we know, in the first place, that the linear transformation T^(-1)(x) would be represented by A^(-1) if the transformation T(x) is represented as Ax?
• Sal assumes that, in general, T(x) = Ax for some linear transformation T from R^m to R^n and some n×m matrix A of fixed real-number coefficients. If you accept this as true, then:

For T: X -> Y (X, Y in R^n), if T is invertible, call the inverse of T "Ti", with Ti: Y -> X. He shows that Ti is also a linear transformation, which means it is also a matrix-vector multiplication (Ti(y) = By, for some n×n matrix B). So let's just call B "Ai" (A inverse). Now T(x) = Ax and Ti(y) = (Ai)y.
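This claim can be checked numerically. A small sketch in Python/NumPy (the particular matrix A here is illustrative, not from the video): the matrix representing T^(-1) is the matrix inverse of A, so applying it after T recovers x.

```python
import numpy as np

# Sketch: if T(x) = A x is invertible, then the matrix representing
# T^(-1) is the matrix inverse of A, so T^(-1)(T(x)) = x.

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # an invertible 2x2 matrix (illustrative)
A_inv = np.linalg.inv(A)        # matrix representing T^(-1)

x = np.array([3.0, -4.0])
y = A @ x                       # T(x)
x_back = A_inv @ y              # T^(-1)(T(x))

assert np.allclose(x_back, x)           # the inverse recovers x
assert np.allclose(A_inv @ A, np.eye(2))  # Ai * A is the identity
```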
• Regarding notation: shouldn't the b in T^(-1)(b) have a vector arrow above it, and shouldn't the x in Ax also have a vector arrow above it?