
Computing a Jacobian matrix

This finishes the introduction of the Jacobian matrix, working out the computations for the example shown in the last video.


  • Chiarandini Pandetta:
    So what exactly is the use of the Jacobian matrix? I understand the local linearity, but how is this useful?

    For some reason, I think this relates to optimisation problems on curved surfaces, is my intuition right?
    (13 votes)
    • Mathuss:
      I know this is really late but for anyone else who's curious:

      The Jacobian is probably most often used when doing a variable change in an integral, for example, when switching from (x, y) Cartesian coordinates to (r, theta) polar coordinates.

      This is important because when you do a change like this, areas scale by a certain factor, and that factor is exactly the determinant of the Jacobian matrix. For example, the determinant of the appropriate Jacobian matrix for polar coordinates is exactly r, so

      Integrate e^(x^2 + y^2) over all of R^2

      would turn into

      Integrate r*e^(r^2) with r from 0 to infinity and theta from 0 to 2*pi

      Notice that not only were the x's and y's substituted with the r in the exponent, but the Jacobian r had to be multiplied into the integral as well in order to account for this "scaling" of area upon changing variables like this.
      (22 votes)
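The area-scaling claim above can be checked numerically. Below is a minimal sketch in plain Python (the function name is my own) that builds the Jacobian of the polar-to-Cartesian map x = r·cos(theta), y = r·sin(theta) and confirms that its determinant comes out to r:

```python
import math

# Polar-to-Cartesian map: x = r*cos(theta), y = r*sin(theta).
# Its Jacobian matrix is [[dx/dr, dx/dtheta], [dy/dr, dy/dtheta]]
# = [[cos(theta), -r*sin(theta)], [sin(theta), r*cos(theta)]],
# whose determinant simplifies to r*(cos^2 + sin^2) = r.
def polar_jacobian_det(r, theta):
    dx_dr, dx_dt = math.cos(theta), -r * math.sin(theta)
    dy_dr, dy_dt = math.sin(theta), r * math.cos(theta)
    return dx_dr * dy_dt - dx_dt * dy_dr

print(polar_jacobian_det(2.0, 0.7))  # ≈ 2.0, i.e. equal to r
```

Whatever angle you pick, the determinant depends only on r, which is exactly why the extra factor of r appears in the polar integral.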
  • Erwin:
    I don't understand why we focus only on the local "rotation" of the transformation, and not on the "translation" (movement) across the space. In the inset animation, why do we stay centered on that original point as it moves across the 2d space?

    Having watched the next video in this series, I wonder if this is because Jacobians are only useful for their determinant, and the determinant (area) doesn't care about movement...
    (4 votes)
    • George Evans:
      I didn't know the answer when I read this earlier, but now I have a thought that might help:

      The Jacobian isn't telling us how the space changes when the transformation is applied to it. That is what the function f tells us. The function f tells us about the "translation" of the square. However, the Jacobian tells us how movement in the un-transformed space corresponds to movement in the transformed space. This movement is often (but not always) rotational. In order to see this movement, we must move and rotate the square.
      (4 votes)
  • Laura Houghton:
    At that point in the video, the rightward and downward components of the vector are shown to be roughly 1 and -0.42 times the size of the vector, respectively, corresponding to the first and second partial derivatives with respect to x. But shouldn't these figures relate to dx, not the transformed vector?
    (4 votes)
  • snaqvi69:
    Great video. Can you please explain why the partial derivative terms are ordered the way they are? In linear transformations, the terms correspond to the images of the basis vectors. Is there a similar connection in the case of a Jacobian matrix?
    (4 votes)
  • mathisduguin:
    If you plug in (pi/2,pi/2), you get the identity matrix. Is there a special name for points that have an identity local linear transformation?
    (4 votes)
  • shekharyadav380:
    Does the Jacobian matrix represent the transformation matrix near the point for which the Jacobian was calculated?
    (2 votes)
    • Pranav Chaturvedi:
      Yes, it tells how local points are transformed. At the particular point (-2, 1) used in the video, the Jacobian matrix describes how the points near that location get transformed. More precisely, I think the Jacobian matrix tells how a neighborhood of the origin (0, 0) would be transformed if we applied the linear transformation we got by evaluating the Jacobian at (-2, 1). Does that make sense?
      (3 votes)
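This local behavior is easy to see numerically. The sketch below (a minimal example with my own choice of step sizes) compares the true output of the video's function f near (-2, 1) against the linear approximation built from its Jacobian:

```python
import math

# The function from the video: f(x, y) = (x + sin y, y + sin x).
def f(x, y):
    return (x + math.sin(y), y + math.sin(x))

# Its Jacobian, as computed in the video: [[1, cos y], [cos x, 1]].
def jacobian(x, y):
    return [[1.0, math.cos(y)],
            [math.cos(x), 1.0]]

px, py = -2.0, 1.0      # the point used in the video
hx, hy = 0.01, -0.02    # a small displacement away from that point

J = jacobian(px, py)
fx, fy = f(px, py)
# Linear approximation: f(p + h) ≈ f(p) + J @ h.
approx = (fx + J[0][0] * hx + J[0][1] * hy,
          fy + J[1][0] * hx + J[1][1] * hy)
exact = f(px + hx, py + hy)
# approx and exact agree to about three decimal places for steps this small.
```

Shrinking hx and hy makes the agreement even tighter, which is exactly what "local linearity" means.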
  • diamantidisno3:
    At that point in the video, could we add the green and red vectors to find the total change of our vector-valued function?
    (3 votes)
    • Yevtushenko Oleksandr:
    How can the x component stay the same and its derivative be 1?
    If there's no change, then why isn't the derivative 0?
    (2 votes)
    • Charles Morelli:
      I'm afraid the logic in the question is self-defeating... if the derivative of the x-component was zero it would have changed (from 1) and so - according to the question's logic - its derivative couldn't be zero.
      Let's think about this...
      Take (in single-variable calculus):
      f(x): y = x + a
      and let a = 0 (just for simplicity in this example)
      [the 'a' would disappear on differentiation, anyway, being a constant... this is, in fact, essentially what the video is dealing with at that time in:
      f1 = x + sin(y)
      because we are differentiating with respect to x, so the y-value - and hence sin(y) - is held constant, so disappears when taking the derivative]
      then, in single-variable calculus:
      f' = dy/dx = 1
      everywhere because it's a straight line with a constant slope of 1, so when:
      x = 1, f(1) = 1 and f'(1) = 1
      So there is no problem with the output of a derivative equalling the original function at a point, or even everywhere (take:
      y = e^x => dy/dx = e^x).
      The derivative of a function is only zero regardless of the input when that function output is constant, regardless of the input (e.g.: f(x) = 3 ).
      That is not the case in the video.
      (2 votes)
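Charles's single-variable argument can also be checked numerically. This small sketch (my own helper; the functions mirror his examples) estimates slopes with a central difference and confirms that f(x) = x has derivative 1 everywhere, not 0:

```python
import math

# Estimate f'(x) with a central difference.
def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = x: output equals input, yet the slope is constantly 1.
print(deriv(lambda t: t, 1.0))            # ≈ 1.0, not 0
# f(x) = e^x: the derivative equals the function itself everywhere.
print(deriv(lambda t: math.exp(t), 1.0))  # ≈ e ≈ 2.718
```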
  • kazemihabib1996:
    I tried multiplying the Jacobian matrix by the vector [-2, 1] and expected to get the same result as plugging the values into [x + sin(y), y + sin(x)], which is [-1.1585290151921035, 0.09070257317431829], but I got [-1.45969769, 1.83229367]. Could you please explain the reason?
    (2 votes)
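The mismatch in the question above can be reproduced in a few lines (a sketch with my own variable names): the Jacobian is not the function itself, so multiplying J by the point gives a different answer than evaluating f there. J's job is to act on small displacements h, via f(p + h) ≈ f(p) + J·h:

```python
import math

def f(x, y):
    return (x + math.sin(y), y + math.sin(x))

px, py = -2.0, 1.0
# Jacobian of f evaluated at (-2, 1): [[1, cos 1], [cos(-2), 1]].
J = [[1.0, math.cos(py)],
     [math.cos(px), 1.0]]

# Multiplying J by the point itself reproduces the asker's numbers...
j_times_p = (J[0][0] * px + J[0][1] * py,
             J[1][0] * px + J[1][1] * py)
print(j_times_p)  # ≈ (-1.4597, 1.8323), which is not f(-2, 1)

# ...but the Jacobian's real job is approximating *changes* in f:
hx, hy = 0.01, 0.01
fx, fy = f(px, py)
approx = (fx + J[0][0] * hx + J[0][1] * hy,
          fy + J[1][0] * hx + J[1][1] * hy)
exact = f(px + hx, py + hy)
# approx and exact agree closely for small displacements.
```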
  • Laura Houghton:
    Anybody with any intuition as to the formula for 2D curl from this example?
    (2 votes)

Video transcript

- [Teacher] So, just as a reminder of where we are, we've got this very non-linear transformation, and we showed that if you zoom in on a specific point while that transformation is happening, it looks a lot like something linear, and we reasoned that you can figure out what linear transformation that looks like by taking the partial derivatives of your given function, the one that I defined up here, and then turning that into a matrix. And what I want to do here is basically just finish up what I was talking about by computing all of those partial derivatives. So, first of all, let me just rewrite the function back on the screen so we have it in a convenient place to look at. The first component is x plus sine of y, and then y plus sine of x was the second component. So, what I want to do here is just compute all of those partial derivatives to show what kind of thing this looks like. So, let's go ahead and get rid of this word, and I'll go ahead and kind of redraw the matrix here. So, for that upper left component, we're taking the partial derivative with respect to x of the first component. So, we look up at this first component, and the partial derivative with respect to x is just one, since it's one times x plus something that has nothing to do with x. And then below that, we take the partial derivative of the second component with respect to x down here, and for that guy, the y, well, that looks like a constant, so nothing happens, and the derivative of sine of x becomes cosine of x. And then up here, we're taking the partial derivative with respect to y of the first component, that upper one here, and for that, the partial derivative of x with respect to y is zero, and the partial derivative of sine of y with respect to y is cosine of y. And then, finally, the partial derivative of the second component with respect to y looks like one, because it's just one times y plus some constant.
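The partial derivatives worked out above can be sanity-checked numerically. This sketch (plain Python, my own helper names) compares the hand-computed Jacobian [[1, cos y], [cos x, 1]] against central finite differences:

```python
import math

# f(x, y) = (x + sin y, y + sin x), the function from the video.
def f(x, y):
    return (x + math.sin(y), y + math.sin(x))

# The matrix computed by hand in the video.
def analytic_jacobian(x, y):
    return [[1.0, math.cos(y)],
            [math.cos(x), 1.0]]

# Approximate each partial derivative with a central difference.
def numeric_jacobian(x, y, h=1e-6):
    j = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        j[i][0] = (f(x + h, y)[i] - f(x - h, y)[i]) / (2 * h)
        j[i][1] = (f(x, y + h)[i] - f(x, y - h)[i]) / (2 * h)
    return j

# At any sample point the two matrices agree to many decimal places.
```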
And this is the general Jacobian as a function of x and y, but if we want to understand what happens around the specific point that we started off at, which I recorded here as negative two, one, we plug that in to each one of these values. So, when we plug in negative two, one, so go ahead and just, again, rewrite it to remember we're plugging in negative two, one as our specific point, that matrix, as a function, kind of a matrix-valued function, becomes one; and then next we have cosine, but we're plugging in negative two for x, cosine of negative two, and if you're curious, that is approximately equal to, I calculated this earlier, negative zero point four two, if you just want to think in terms of a number there. Then for the upper right, we have cosine again, but now we're plugging in the value for y, which is one, and cosine of one is approximately equal to zero point five four; and then bottom right, that's just another constant: one. So, that is the matrix, just as a matrix full of numbers, and just as kind of a gut check, we can take a look at the linear transformation this was supposed to look like, and notice how the first basis vector, the thing it got turned into, which is this vector here, does look like it has coordinates one and negative zero point four two, right? It's got this rightward component that's about as long as the vector it started as, and then this downward component, which, I think it's pretty believable that that's negative zero point four two. And then, likewise, this second column is telling us what happened to that second basis vector, which is the one that looks like this. And again, its y-component is about as long as how it started, right, the length of one, and then the rightward component is around half of that, and we actually see that in the diagram. But this is something you compute. Again, it's pretty straightforward.
You just take all of the possible partial derivatives, and you organize them into a grid like this. So, with that, I'll see you guys next video.
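As a check on the numbers quoted in the transcript, evaluating the Jacobian entries at the point (-2, 1) in Python reproduces the same approximations:

```python
import math

# Jacobian of f(x, y) = (x + sin y, y + sin x) at the point (-2, 1).
J = [[1.0, math.cos(1.0)],
     [math.cos(-2.0), 1.0]]

print(round(J[0][1], 2))  # 0.54, the upper-right entry (cos 1)
print(round(J[1][0], 2))  # -0.42, the lower-left entry (cos -2)
```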