Course: Linear algebra > Unit 2
Lesson 1: Functions and linear transformations
- A more formal understanding of functions
- Vector transformations
- Linear transformations
- Visualizing linear transformations
- Matrix from visual representation of transformation
- Matrix vector products as linear transformations
- Linear transformations as matrix vector products
- Image of a subset under a transformation
- im(T): Image of a transformation
- Preimage of a set
- Preimage and kernel example
- Sums and scalar multiples of linear transformations
- More on matrix addition and scalar multiplication
im(T): Image of a transformation
Showing that the image of a subspace under a transformation is also a subspace, and defining the image of a transformation. Created by Sal Khan.
Want to join the conversation?
- What is the difference between a "subspace" and a "subset"?
- A subspace is a subset that must be closed under addition and scalar multiplication. That means if you take two members of the subspace and add them together, you'll still be in the subspace. And if you multiply a member of the subspace by a scalar, you'll still be in the subspace. If these two conditions aren't met, your set is not a subspace.
All subspaces are subsets, but not all subsets are subspaces. Just think of it as a definition. The intuition behind it is in Sal's videos about subspaces.
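These closure checks can be run numerically. A minimal sketch (the lines y = 2x and y = 2x + 1 are illustrative choices, not from the discussion): the line through the origin passes both closure tests, while the shifted line is a subset of R^2 but not a subspace.

```python
import numpy as np

def on_line(v, intercept):
    """Check whether vector v = (x, y) lies on the line y = 2x + intercept."""
    return bool(np.isclose(v[1], 2 * v[0] + intercept))

a = np.array([1.0, 2.0])   # on y = 2x
b = np.array([3.0, 6.0])   # on y = 2x

# y = 2x is closed under addition and scalar multiplication
assert on_line(a + b, 0)
assert on_line(5 * a, 0)

# the shifted line y = 2x + 1 fails both closure tests
c = np.array([0.0, 1.0])   # on y = 2x + 1
d = np.array([1.0, 3.0])   # on y = 2x + 1
assert not on_line(c + d, 1)   # the sum falls off the line
assert not on_line(2 * c, 1)   # a scalar multiple falls off the line
```

Note that the failing line is exactly the one that misses the zero vector, which matches the discussion below about why subspaces must contain it.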
- Why does the zero vector always have to be in a subspace V?
- Gavriel,
Remember the three rules that Sal gave for the definition of a subspace? They were:
1. Contains the 0 vector
2. Closed under scalar multiplication
3. Closed under vector addition.
Well, imagine a vector A that is in your subspace, and is NOT equal to zero. If rule #2 holds, then the 0 vector must be in your subspace, because if the subspace is closed under scalar multiplication that means that vector A multiplied by ANY scalar must also be in the subspace. Well suppose we multiply by the scalar 0? We would get the 0 vector. So for rule #2 to hold, the subspace must include the 0 vector. In other words, rule #1 must hold if rule #2 is to hold.
And honestly, rule #1 also must hold if rule #3 is to hold. After all, if A is a vector in our subspace, and so is -1*A (from rule #2) then the subspace must also include a zero vector because if vector addition holds, then the sum of any two vectors in our subspace must ALSO be in our subspace. Well, if A and -A are both in our subspace, then so must A+ (-A)… which is of course, the zero vector.
In fact, I'm not even sure why Sal lists it as a rule of subspaces that they include the 0 vector, because with rule #2 or rule #3 they have to include the zero vector anyway.
- Can someone please explain more about how the image of the transformation (im(T)) is equivalent to the column space (C(A)) of the matrix that the transformation can be represented as?
- We can fully define a linear transformation by deciding where it sends the basis vectors. Once we've done that, we can express the transformation as a matrix by writing the basis vectors as a row of column vectors, then replacing each by the vector we send it to.
e.g. the transformation that sends <1, 0> to <3, 5, 2> and <0, 1> to <1, 8, 3> can be written as
[3 1]
[5 8]
[2 3]
So reading off the column vectors lets us know a few vectors that our transformation definitely hits, since the basis vectors get mapped to them. Then linearity lets us map to every linear combination of these column vectors, i.e. the column space.
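A quick NumPy sketch of this, using the example matrix above (the input vector x = <2, -1> is an arbitrary choice):

```python
import numpy as np

# the matrix built from the images of the basis vectors above
A = np.array([[3, 1],
              [5, 8],
              [2, 3]])

# each basis vector is sent to the corresponding column
assert np.array_equal(A @ np.array([1, 0]), A[:, 0])
assert np.array_equal(A @ np.array([0, 1]), A[:, 1])

# any other input lands on a linear combination of the columns,
# so the image of the transformation is the column space
x = np.array([2, -1])
assert np.array_equal(A @ x, 2 * A[:, 0] - A[:, 1])
```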
- Is Im(T) the same as Im(A), if T is the transformation and A the standard matrix?
- Given the equation T(x) = Ax, Im(T) is the set of all possible outputs. Im(A) isn't the correct notation and shouldn't be used. You can find the image of any function, even if it's not a linear map, but you don't take the image of the matrix in a linear transformation.
- Why should the scalars be real and not complex?
- Is it safe to assume that where Sal is talking about real numbers I can take them as complex? Or is dealing with complex numbers different?
- Let's say I have this Im(T), as at 12:30, and I want it to equal ker(T); what should I do? Just to simplify, consider Rn = Rm = R3.
- At 1:44, am I right that the zero vector condition is not totally redundant, and the zero vector must be a member of the subspace to prevent the empty set {} from being a valid subspace? The empty set {} is closed under addition and multiplication, but it has no vector to multiply by zero to get the zero vector.
- It is sort of redundant to explicitly include the zero vector. However, in those textbooks that only list two properties (closure under scalar multiplication and closure under vector addition), you'll notice that they explicitly state that V is a non-empty subset of a certain vector space (R^n, for instance). So yes, you can (if you want) avoid mentioning that the zero vector is a member of V but ONLY if you explicitly mention that V is non-empty (in order to avoid the empty set being a valid subspace, as you indicated in your question).
Personally, I prefer to explicitly state that the zero vector is a member of V because:
1.) I'm more likely to forget to mention that V is a non-empty subset
2.) The absolute EASIEST way to prove that a subset is NOT a subspace is to show that the zero vector is not an element (and explicitly mentioning that the zero vector must be a member of a certain set in order to make it a valid subspace reminds me to check that part first).
- Would it be possible to map from R^3 to R^2? What would happen to our image?
- Here's a map from R^3 to R^2: map (x, y, z) to (0, 0). This is the zero map. In general, if you have a vector in R^3 and hit it with a matrix on the left that has 3 columns and 2 rows, then you map from R^3 to R^2.
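As a concrete sketch (the particular 2×3 matrix below is an illustrative assumption, not from the discussion): hitting a vector in R^3 with a matrix that has 3 columns and 2 rows produces a vector in R^2, so the image sits inside R^2.

```python
import numpy as np

# a matrix with 2 rows and 3 columns maps R^3 down to R^2
A = np.array([[1, 0, 2],
              [0, 1, 3]])

v = np.array([3, 5, 2])   # a vector in R^3
image_of_v = A @ v        # its image lands in R^2

assert image_of_v.shape == (2,)
assert np.array_equal(image_of_v, np.array([7, 11]))
```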
- If we let the matrix A represent T, is a basis of the column space also a basis of the image?
- The column space of a matrix is the same as the image of the domain under the transformation. Given T(x) = Ax, we know that Im(T) = C(A).
- As far as I understand, for a subset to be a subspace it must be nonempty, but not necessarily contain the 0 vector. Of course, if it doesn't contain the 0 vector, most of the time it's not going to work. This is what the book Linear Algebra by Steven J. Leon, University of Massachusetts, says: "If S is a nonempty subset of a vector space V, and S satisfies the conditions (i) αx ∈ S whenever x ∈ S for any scalar α, (ii) x + y ∈ S whenever x ∈ S and y ∈ S, then S is said to be a subspace of V." So which is which?
- If a subset meets your two criteria, then it necessarily contains the zero vector.
It's nonempty, so it must contain some vector v.
By your first criterion, it must contain -v.
By your second criterion, it must contain v+(-v)=0.
So including the explicit statement "S contains the zero vector" is unnecessary.
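The argument above can be traced in code (the vector v is an arbitrary assumed member of the nonempty subset):

```python
import numpy as np

v = np.array([2.0, -7.0, 1.0])   # some vector in the nonempty subset S

neg_v = -1 * v                   # closure under scalar multiplication puts -v in S
zero = v + neg_v                 # closure under addition puts v + (-v) in S

# so the zero vector is forced to be a member of S
assert np.array_equal(zero, np.zeros(3))
```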
Video transcript
Let's say that I have some set
V that is a subspace in Rn. And just as a reminder,
what does it mean? That's just some set, or some
subset of Rn where if I take any two members of that subset--
so let's say I take the members a and b-- they're both
members of my subspace. By the fact that this is a
subspace, we then know that the addition of these two
vectors, or a plus b, is also in my subspace. And this is our closure
under addition. And by the fact that it's a
subspace, we also know that if we multiply any member of our
subspace by a scalar -- so the fact that those guys are members
of our subspace -- we also know that if I pick one
of them, let's say a, and I multiply a by some scalar, that
this is also going to be a member of our subspace. And we sometimes call this
closure under scalar multiplication. And then a somewhat redundant
statement is that V, well it must contain the zero vector. And that's true of
all subspaces. V -- let me write it this
way -- the zero vector is a member of V. And it would be the zero vector
with n components here, because V is a subspace of Rn. And why I say that's redundant,
because if I say that any multiple of these
vectors is also in V, I could just set the scalar
to be equal to 0. So this statement kind
of takes the statement into account. But in a lot of textbooks, they
will always write, oh and the zero vector has to
be a member of V. Although, that's kind of
redundant with the closure under scalar multiplication. Fair enough. Now, let's say that I also have
some transformation T. It is a mapping, a function,
from Rn to Rm. What I want to understand, in
this video is, I have a subspace right here, V. I want to understand whether
the transformation of the subspace -- and what
did we call that? We called that the image of our
subspace, or our subset, either way. The image of V under T. In the last video, just to kind
of help you visualize it. How did that work or -- we
had some subset of Rn that looked like this. It was a triangle that looked
something like that. And that was in Rn, this was
actually in R2, it was a triangle that looked something
like that. And we figured out its
image under T. So we went from R2 to R2, and we had our transformation. And it ended up looking
something like this. If I remember it properly. It ended up looking like a --
gee, I don't remember it fully, but it was like a
triangle that was skewed like this, rotated. So it was a -- actually I think
it was more like -- I think that's right. It was rotated a bit clockwise
like that and it was skewed. But the exact particulars
of that last video aren't what matter. What matters is that you are
able to visualize what an image under transformation
means. It means you take some subset of
R2, all of the vectors that define this triangle
right here. That's some subset of R2. You transform all of them, and
then you get some subset in your codomain. You could call this the image,
because the transformation of that triangle, or if we call
this s, it's equal to the transformation of s. Or you could say it's the image
of-- you can just call it the set s, but maybe it helps
you to visualize-- call it the image of this
triangle under T. Or maybe even a neater way of
thinking about it is, this triangle-- that skewed, rotated
triangle-- this one is the image of this right
triangle under T. I think that might make
a little bit of visual sense to you. And just as a bit of reminder,
in that last video these triangles, these weren't
subspaces. And just as you could take
scalar multiples of some of the vectors that are members of
this triangle, and you'll find that they're not going
to be in that triangle. So this wasn't a subspace, this
was just a subset of R2. Not all subsets are subspaces,
but all subspaces are definitely subsets. Although something can be
a subset of itself. I don't want to wander
off too much. But this just helps
you visualize what we mean by an image. It means all of the vectors that
are mapped to, from the members of your subset. So I want to know whether
the image of V under T is a subspace. So in order for it to be a
subspace, if I take the transformation -- let me
find two members of the image of V under T. Well clearly if I take the
transformation of any members of V, I'm getting members
of the image. Right? So I can write this. Clearly the transformation of
a and the transformations of b, these are both of members
of our images of V under T. These are both members
of that right there. So my question to you is what
is the transformation of a plus the transformation of b? And the way I have written this,
these are two arbitrary members of our image
of V under T. Or maybe I should call
it T of capital V. These are two arbitrary
members. So what is this equal to? Well, we know from our
properties, our definition of linear transformations, the sum
of the transformations of two vectors is equal to the
transformation of the sum of those vectors. Now, is the transformation
of a plus b, is this a member of TV? Is it a member of our image? Well, a plus b is a member of V,
and the image contains the transformation of all
of the members of V. So the image contains the
transformation of this guy. This guy, a plus b
is a member of V. So you're taking a
transformation of a member of V which, by definition, is in
your image of V under T. So this is definitely true. Now, let's ask the
next question. If I take a scalar multiple of
some member of my image of V under T, or my T of capital
V, right there. If I take c, some scalar,
what is this equal to? By definition for linear
transformation, this is the same thing as a transformation
of the scalar times the vector. Now is this going to
be a member of our image of V under T? Well we know that ca is
definitely in V, right? That's from the definition
of a subspace. This is definitely in V. And so, if this is in V, the
transformation of this has to be in V's image under T. So this is
also a member of the image of V under T. And obviously, you can
set this equal to 0. The zero vector is a member of
V, so any transformation of -- if you just put a 0 here, you'll
get the zero vector. So the zero vector is definitely
-- I don't care what this is, if you multiply
it times 0, you are going to get the zero vector. So the zero vector
is definitely also a member of TV. So we come to the result that
T -- the image of V under T, is a subspace. Which is a useful result
which we will be able to use later on. But this, I guess, might
naturally lead to the question: everything
we have been dealing with so far has been
subsets, in the case of this triangle, or subspaces,
in the case of V. But what if I were to take the
image of Rn under T, right? This is the image
of Rn under T. Let's think about
what this means. This means, what do we get when
we take any member of Rn, what is the set of all
of the vectors? Then when we take the
transformation of all of the members of Rn, let
me write this. This is equal to the set of the
transformation of all of the x's, where each x
is a member of Rn. So you take each of the members
of Rn and transform them, and you create
this new set. This is the image
of Rn under T. Well, there's a couple of ways
you can think of this. Remember when we defined
-- let's see, T is a mapping from Rn to Rm. We defined this as the domain. All of the possible inputs
for our transformation. And we define this
as the codomain. And remember I told you that
the codomain is essentially part of the definition of
the function or of the transformation, and it's the
space that we map to. It's not necessarily all
of the things that we're mapping to. For example, the image of Rn
under transformation, maybe it's all of Rm or maybe it's
some subset of Rm. The way you can think about it,
and I touched on this in that first video, is-- and
they'll never, or at least the linear algebra books I looked
at, they didn't specify this-- but you can kind of view
this as the range of T. These are the actual members
of Rm that T maps to. That if you take the image of
Rn under T, you are actually finding-- let's say that
Rm looks like that. Obviously it will go
in every direction. And let's say that when
you take-- let me draw Rn right here. And we know that T is a
mapping from Rn to Rm. But let's say when you take
every element of Rn and you map them into Rm, let's say
you get some subset of Rm, let's say you get something
that looks like this. So let me see if I can
draw this nicely. So you literally map every point
here, and it goes to one of these guys. Or one of these guys can be
represented as a mapping from one of these members
right here. So if you map all of them you
get this subset right here. This subset is
T of Rn, the image of Rn under T. And in the terminology that
you don't normally see in linear algebra a lot,
you can also kind of consider it its range. The range of T. Now, this has a special name. This is called -- and I don't
want you to get confused -- this is called the image of T. Image of T. This might be a little
confusing, image of T. So this is sometimes written
as just im of T. Now you are a little confused
here, you are like, before when we were talking about
subsets, we would call this the image of our subset under T. And that is the correct
terminology when you're dealing with a subset. But when you take, all of
a sudden, the entire n dimensional space, and you're
finding that image, we call that the image of the actual
transformation. So we can also call this set
right here the image of T. And now what is the
image of T? Well, we know that we can
write any-- and this is literally any-- so T is
going from Rn to Rm. We can write T of x-- we
can write any linear transformation like this-- as
being equal to some matrix, some m by n matrix
times a vector. And these vectors obviously
are going to be members of Rn-- A times some x in Rn. And what is this? So what is the image -- let
me write it in a bunch of different ways -- what is
the image of Rn under T? So we could write that as T --
let me write it this way. We could write that as T of Rn,
which is the same thing as the image of T. Notice we're not saying under
anything else, because now we're talking about the image of the
actual transformation. Which we could also write
as the image of T. Well what are these equal to? This is equal to the set of all
the transformations of x. Well all the transformations of
x are going to be Ax where x is a member of Rn. So x is going to be an n-tuple,
where each element has to be a real number. So what is this? So if we write A-- let
me write my matrix A. It's just a bunch of column
vectors, a1, a2. It's going to have n
of these, right? Because it has n columns. And so a times any x is going to
be-- so if I multiply that times any x that's
a member of Rn. I multiply x1, x2, all
the way to xn. We've seen this multiple,
multiple times. This is equal to x1-- the scalar
x1, times a1, plus x2 times a2, all the way
to plus xn times an. And we're saying we want the
set of all of these sums of these column vectors, where x
can take on any vector in Rn. Which means that the elements
of x can take on any real scalar values. So the set of all of these is
essentially all of the linear combinations of the columns
of a, right? Because I can set these guys
to be equal to any value. So what is that equal to? That is equal to, and we
touched on this, or we actually talked about this when
we introduced the idea. This is equal to the
column space of A. Or we just denoted it
sometimes as C of A. So that's a pretty
neat result. If you take -- it's almost
obvious, I mean it's just I'm playing with words a little
bit-- but any linear transformation can be
represented as a matrix vector product. And so the image of any linear
transformation, which means the subset of its codomain,
when you map all of the elements of its domain into
its codomain, this is the image of your transformation. This is equivalent to the column
space of the matrix that your transformation
could be represented as. And the column space, of course,
is the span of all the column vectors of your matrix. This is just all of the linear
combinations, or the span, of all of your column vectors,
which we do right here. Anyway hope you found that a
little interesting, and you will be able to use these
results in the future.
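The transcript's closing point — that the image of T(x) = Ax is the column space of A, the span of its columns — can be sketched numerically. The matrix below is an illustrative choice whose second column is twice the first, so its column space (and hence the image) is only one-dimensional even though the codomain is R^3:

```python
import numpy as np

A = np.array([[1, 2],
              [2, 4],
              [3, 6]])   # second column = 2 * first column

# every output Ax is x1*a1 + x2*a2, a linear combination of the columns
x = np.array([4.0, -1.0])
assert np.allclose(A @ x, x[0] * A[:, 0] + x[1] * A[:, 1])

# the image is the span of the columns, so its dimension is the rank of A;
# here both columns point the same way, so the image is a line in R^3
assert np.linalg.matrix_rank(A) == 1
```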