
In the last video we were looking at this particular function. It's a very nonlinear function, and we were picturing it as a transformation that takes every point (x, y) in space to the point (x + sin(y), y + sin(x)). Moreover, we zoomed in on a specific point, and let me actually write down what point we zoomed in on: it was (-2, 1). That's something we're going to want to record here: (-2, 1). I added a couple of extra grid lines around it just so we can see in detail what the transformation does to points that are in a neighborhood of that point, and over here this square shows the zoomed-in version of that neighborhood. What we saw was that even though the function as a whole, as a transformation, looks rather complicated, around that one point it looks like a linear function; it's locally linear. So what I'll show you here is what matrix is going to tell you the linear function that this looks like, and this is going to be some kind of two-by-two matrix. I'll make a lot of room for ourselves here; it'll be a two-by-two matrix. The way to think about it is to first go back to our original setup, before the transformation, and think of just a tiny step to the right, what I'm going to think of as a little partial x, a tiny step in the x-direction. What that turns into after the transformation is going to be some tiny step in the output space, and here let me actually draw on what that tiny step turned into. It's no longer purely in the x-direction; it has some rightward component, but now also some downward component. To be able to represent this in a nice way, instead of writing the entire function as something with a vector-valued output, I'm going to go ahead and represent it as two separate scalar-valued functions. I'm going to write the scalar-valued functions f1(x, y), so I'm just giving a name to x + sin(y), and f2(x, y); again, all I'm doing is giving names to the functions we already have written down.
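As a concrete sketch of this setup (the function, the point (-2, 1), and the component names f1 and f2 come from the video; the code itself is just one illustrative way to write it down), the transformation split into its two scalar-valued components looks like:

```python
import math

# The transformation from the video, f(x, y) = (x + sin(y), y + sin(x)),
# written as two separate scalar-valued component functions.
def f1(x, y):
    return x + math.sin(y)

def f2(x, y):
    return y + math.sin(x)

# The specific point we zoom in on, and where the transformation sends it.
x0, y0 = -2.0, 1.0
print(f1(x0, y0), f2(x0, y0))  # roughly -1.1585 and 0.0907
```

Splitting the vector-valued output into f1 and f2 like this is what lets each entry of the upcoming matrix be an ordinary single-variable partial derivative.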
When I look at this vector, the consequence of taking a tiny dx step in the input space, it corresponds to some 2D movement in the output space. The x-component of that movement, if I were to draw it out and ask, hey, what's the x-component of that movement, that's something we think of as a little partial change in f1, the x-component of our output. And if we take that partial f1 divided by the size of the initial tiny change, it basically scales it up to be a normal-sized vector, not a tiny nudge but something that stays put, that doesn't shrink as we zoom in further and further. Then similarly, the change in the y-direction, the vertical component of that step, still caused by the dx, still caused by that initial step to the right, that is going to be the tiny partial change in f2, the y-component of the output (because here we're just looking in the output space), as caused by a partial change in the x-direction. And again, I kind of like to think about why we're dividing by a tiny amount: this partial f2 is really a tiny, tiny nudge, but by dividing by the size of the initial tiny nudge that caused it, we're getting something that's basically a number, something that doesn't shrink when we consider more and more zoomed-in versions. So that's what happens when we take a tiny step in the x-direction, but another thing you can consider is a tiny step in the y-direction. We want to know, hey, if you take a single step, some tiny unit upward, what does that turn into after the transformation? What that looks like is this vector that still has some upward component, but it also has a rightward component. Now I'm going to write its components as the second column of the matrix, because as we know, when you're representing a linear transformation with a matrix, the first column tells you where the first basis vector goes, and the second column
shows where the second basis vector goes. If that feels unfamiliar, either check out the refresher video or maybe go and look at some of the linear algebra content. To figure out the coordinates of this guy, we do basically the same thing. We say, first of all, the change in the x-direction here, the x-component of this nudge vector, that's going to be given as a partial change to f1, the x-component of the output (here we're looking in the output space, so we're dealing with f1 and f2). And lastly, what that change was caused by: it was caused by a tiny change in the y-direction. So it's the change in f1 caused by some tiny step in the y-direction, divided by the size of that tiny step. Then the y-component of our output, the y-component of this step in the output space that was caused by the initial tiny step upward in the input space, well, that is the change of f2, the second component of our output, as caused by dy, by that little partial y. And of course, all of this is very specific to the point that we started at. We started at the point (-2, 1), so for each of these partial derivatives we're really saying: take the function and evaluate it at the point (-2, 1). When you evaluate each one of these at the point (-2, 1), you'll get some number, and that will give you a very concrete 2×2 matrix representing the linear transformation that this guy looks like once you've zoomed in. So this matrix, full of all of the different partial derivatives, has a very special name. It's called, as you may have guessed, the Jacobian, or more fully, the Jacobian matrix. One way to think about it is that it carries all of the partial differential information: it's taking into account both components of the output and both possible inputs, and giving you kind of a grid of what all the partial derivatives are. But as I hope you see, it's much more than just a way of recording what all the partial derivatives are. There's a reason for organizing it like this in particular, and it really does come down to this idea of local linearity. If you understand that the Jacobian matrix is fundamentally supposed to represent what a transformation looks like when you zoom in near a specific point, almost everything else about it will start to fall into place. In the next video I'll go ahead and actually compute this, just to show you what the process looks like and how the result matches the picture we're looking at. See you then.
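To make this concrete, here is a small sketch that evaluates the Jacobian of this transformation at (-2, 1). The analytic entries follow from differentiating the components given in the video (df1/dx = 1, df1/dy = cos(y), df2/dx = cos(x), df2/dy = 1); the finite-difference check, which mirrors the "tiny nudge divided by the size of the nudge" picture, is my own illustrative addition, not something from the video:

```python
import math

# f(x, y) = (x + sin(y), y + sin(x)), split into scalar components.
def f1(x, y):
    return x + math.sin(y)

def f2(x, y):
    return y + math.sin(x)

def jacobian(x, y):
    """Analytic Jacobian: each entry is the partial derivative of one
    output component with respect to one input coordinate."""
    return [[1.0, math.cos(y)],
            [math.cos(x), 1.0]]

def jacobian_numeric(x, y, h=1e-6):
    """Finite-difference approximation: nudge the input by h in each
    direction and divide the resulting output change by h."""
    return [[(f1(x + h, y) - f1(x, y)) / h, (f1(x, y + h) - f1(x, y)) / h],
            [(f2(x + h, y) - f2(x, y)) / h, (f2(x, y + h) - f2(x, y)) / h]]

# Evaluating at the point from the video gives a concrete 2x2 matrix.
J = jacobian(-2.0, 1.0)
print(J)  # roughly [[1.0, 0.5403], [-0.4161, 1.0]]
```

The first column, (1, cos(-2)), is where a tiny rightward step lands after the transformation, and the second column, (cos(1), 1), is where a tiny upward step lands, matching the rightward-and-downward and upward-and-rightward vectors described above.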