Lesson 13: Jacobian
Jacobian prerequisite knowledge
Before jumping into the Jacobian, it's important to make sure we all know how to think about matrices geometrically. This is targeted toward those who have seen linear algebra but may need a quick refresher.
Want to join the conversation?
- The benefits of watching your 'Essence of Linear Algebra' series ;-)(37 votes)
- Does this only apply to square matrices? Thanks(3 votes)
- No, this applies to all matrices. In fact, if you check out the videos on Linear Transformations in the Linear Algebra section, you'll learn all about them. Actually, if you have the time (and the interest), you should definitely consider starting from the beginning of Linear Algebra--it's all really helpful to learn.(5 votes)
- How is the coordinate system curvy in nature after the transformation? The answer would just be a vector, but it must have a coordinate system. I could have found that out if I had the matrix A, such that I could find the alternate basis coordinate system, but that is not available either.(1 vote)
- Hi, I'm trying to learn Jacobians and I'm really confused. Can someone tell me which one is the series where this is covered? Is it the matrix transformation series? This one: https://www.khanacademy.org/math/linear-algebra/matrix-transformations? Thanks in advance.(1 vote)
- The Jacobian series can be found here: https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives#jacobian(1 vote)
- I see how the matrix, call it M, maps a point in the premapped space to a new point in the mapped space. But when you play your transformation, how are the interim points generated during the animation, between the starting and ending points? It seems you have interim matrices between I (the starting point) and M (the ending point); what are these interim matrices? Like, if I wanted to program a transformation, I mean the animation part in between.(1 vote)
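One plausible way to generate those in-between frames (the video doesn't say how its animation is made, so this is only a sketch) is to interpolate the interim matrix linearly between the identity I and M, that is, A(t) = (1 - t)I + tM for t running from 0 to 1, and apply A(t) to every point:

import numpy as np

# Matrix from the video: its columns are where the basis vectors land.
M = np.array([[2.0, -3.0],
              [1.0,  1.0]])
I = np.eye(2)

def interim_matrix(t):
    # Blend from the identity (t = 0) to M (t = 1); one simple choice,
    # not necessarily how the video's animation is actually produced.
    return (1 - t) * I + t * M

point = np.array([2.0, 1.0])  # the point (2, 1) followed in the video
for t in np.linspace(0.0, 1.0, 5):
    print(t, interim_matrix(t) @ point)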
- Awesome "review" for someone who took vector class 3 years ago and doesn't remember what was covered.(1 vote)
Video transcript
- Hello, everyone. In these next few videos, I'm going to be talking about something called the Jacobian, and more specifically the Jacobian matrix, or sometimes the associated determinant. Here, I just want to talk about some of the background knowledge that I'm assuming, because to understand the Jacobian, you do have to have a little bit of a background in linear algebra. In particular, I want to make sure that everyone here understands how to think about matrices as transformations of space. When I say transformations, let me just get kind of a matrix on here. I'll call it two, one and negative three, one. You'll see why I'm coloring it like this in just a moment.

When I say how to think about this as a transformation of space, I mean you can multiply this matrix by some kind of two-dimensional vector, some kind of two-dimensional x, y, and that's going to give us a new two-dimensional vector. In this case, what it gives us is two x plus negative three times y, and then one x plus one times y. This is a new two-dimensional vector somewhere else in space, and even if you know how to compute it, there's still room for a deeper geometric understanding of what it actually means to take a vector x, y to the vector two x plus negative three y, one x plus one y.
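For reference, written out in matrix notation, the multiplication being described is:

$$ \begin{bmatrix} 2 & -3 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2x - 3y \\ x + y \end{bmatrix} $$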
There's also still a deeper understanding of what we mean when we call this a linear transformation. What I'm going to do is just show you what this particular transformation looks like on the left here, where for every single point on this blue grid, I'm going to tell the computer, "Hey, if that point was x, y, I want you to take it to two x plus negative three y, one x plus one y." Here's what it looks like; let me just play it out here. All of the points in space move, and you end up in some final state.

There are a couple of important things to note. First of all, all of the grid lines remain parallel and evenly spaced, and they're still lines; they didn't get curved in some way, and that's very, very special. That is the geometric way that you can think about this term, this idea of a linear transformation. I like to think about it as: lines stay lines, and in particular the grid lines here, the ones that started off as vertical and horizontal, still remain parallel and evenly spaced. The other thing to notice is that I have two vectors highlighted: the green vector and the red vector. If we back things up, these are the ones that started off as the basis vectors.
Let me make a little bit more room here. The green vector is one, zero: one in the x-direction, zero in the y-direction. That red vertical vector is zero, one. If we notice where they land under this transformation, when the matrix is multiplied by every single vector in space, the place where the green vector lands, the one that started off as one, zero, has coordinates two, one, and that corresponds very directly with the fact that the first column of our matrix is two, one. Similarly, the second vector, the one that started off as zero, one, ends up at the coordinates negative three, one, and that corresponds with the fact that the next column is negative three, one.

It's actually relatively simple to see why that's going to be true. Here, I'll go ahead and multiply the matrix that we had (see, now it's kind of easy to remember what the matrix is; I can just read it off here as two, one, negative three, one) by the basis vectors, just to see why it actually takes the basis vectors to the columns like this. When you do the multiplication by one, zero, notice what it takes us to: it's two times one, that'll be two, and then negative three times zero, so that'll just be zero; and down here, it's one times one, so that's one, and then one times zero, so again we're adding zero. Because of the zero down here, the only terms that actually matter are the ones in that first column. Similarly, if we take that same matrix, two, one, negative three, one, and multiply it by zero, one, the second basis vector, what you're going to get is two times zero, so zero, plus the element in the second column, and then one times zero, so another zero, plus one times one, that one. Again, it's kind of like that zero knocks out all of the terms in the other columns.
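In symbols, this is why each basis vector lands on the corresponding column:

$$ \begin{bmatrix} 2 & -3 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} 2 & -3 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -3 \\ 1 \end{bmatrix} $$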
Then, like I said, geometrically, the meaning of a linear transformation is that grid lines remain parallel and evenly spaced. When you start to think about it a little bit, if you know where the green vector lands and where the red vector lands, that's going to lock into place where the entire grid has to go. Let me show you what I mean, and how this corresponds with maybe a different definition that you've heard for what linear transformation means.

If we have some kind of function L, and it takes in a vector and spits out a vector, it's said to be linear if it satisfies the property that when you take a constant times a vector, what it produces is that same constant times whatever would have happened if you applied the transformation to the vector not scaled. So here you're applying the transformation to a scaled vector, and evidently that's the same as scaling the transformation of the vector. Similarly, the second property of linearity is that if you add two vectors, it doesn't really matter whether you add them before or after the transformation: if you take the sum of the vectors and then apply the transformation, that's the same as first applying the transformation to each one separately and then adding up the results.
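In symbols, those two properties of a linear transformation L are:

$$ L(c\,\vec{v}) = c\,L(\vec{v}), \qquad L(\vec{v} + \vec{w}) = L(\vec{v}) + L(\vec{w}) $$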
One of the most important consequences of this formal definition of linearity is that if you take your function and apply it to some vector x, y, I can split up that vector as x times the first basis vector, one, zero, plus y times the second basis vector, zero, one. Because of those two properties of linearity, if I can split it up like this, it doesn't matter whether I do the scaling and adding before the transformation or after it, so I can say the result is x times whatever the transformed version of one, zero is, plus y times the transformed version of zero, one. I'll show you geometrically what this means in just a moment, but I wanted to get all the algebra on the screen.
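Written out, that splitting-up step is:

$$ L\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = L\left(x \begin{bmatrix} 1 \\ 0 \end{bmatrix} + y \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right) = x\,L\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\right) + y\,L\left(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\right) $$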
To be concrete, let's actually put in a value for x and y here, and think about that specific vector geometrically. Maybe I'll put in something like the vector two, one. If we look over on the grid, we're going to be focusing on the point that's over here at two, one. I'm going to play the transformation, and I want you to follow this point to see where it lands; it's going to end up over here. Okay, so in terms of the old grid, the original one that we started with, it's now at the point one, three. This is where we've ended up, but importantly, I want you to notice how it's still two times that green vector plus one times that red vector. It's satisfying the property that it's still x times whatever the transformed version of the first basis vector is, plus y times the transformed version of the second basis vector.
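Checking that numerically with the same matrix:

$$ \begin{bmatrix} 2 & -3 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = 2 \begin{bmatrix} 2 \\ 1 \end{bmatrix} + 1 \begin{bmatrix} -3 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix} $$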
That's all just a little overview, and the upshot, the main thing I want you to remember from all of this, is that when you have some kind of matrix, you can think of it as a transformation of space that keeps grid lines parallel and evenly spaced. That's a very special kind of transformation; it is a very restrictive property to have on a function from 2-D points to other 2-D points. The convenient way to encode that transformation is that the landing spot for the first basis vector, the one that started off one unit to the right, is represented by the first column of the matrix, and the landing spot for the second basis vector, the one that was pointing one unit up, is encoded in the second column. If this feels totally unfamiliar, or you want to learn more about it, it's something that I've made other videos on in the past. But in terms of understanding the Jacobian matrix, where we're going with this, and getting a geometric feel for it, that short overview should be enough to get us going. With that, I will see you in the next video.