
Gradient

The gradient captures all the partial derivative information of a scalar-valued multivariable function. Created by Grant Sanderson.

Want to join the conversation?

  • Franz Markovic
    What is a partial derivative operator? Especially, what is an operator?
    (11 votes)
    • Mateusz
      An operation is just a function; "operation" is simply another name for it.
      An operator is a symbol that corresponds to a function.
      Example:
      Addition is an operation. In other words, it is a function. Its domain is R x R (where R is the set of real numbers), and its codomain is R: you take two real numbers and obtain one real number as a result.
      You can write it like this: +(5,3)=8. That is familiar function notation, like f(x,y), but with the symbol + instead of f. There is another, more common way: 5+3=8. When there are no parentheses around it, one tends to call this + an operator. But it's all just words.

      The partial derivative operator nabla (the upside-down triangle) is a symbol for taking the gradient, which was explained in the video.
      Sidenote: sometimes the word "operator" is used interchangeably with "operation"; you see this all the time. Words like "cook" (the person) and "(to) cook" (the action) work the same way, because we tend to identify the things that do actions with the actions themselves.
      (59 votes)
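Mateusz's point that + is just a function can be checked directly in Python, whose standard library exposes infix operators as ordinary functions (a small sketch of the idea, not part of the original answer):

```python
import operator

# "+" as a function: it maps a pair of numbers to a number, R x R -> R
assert operator.add(5, 3) == 8       # prefix (function) notation: +(5, 3) = 8
assert operator.add(5, 3) == 5 + 3   # infix notation: the familiar 5 + 3
```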
  • Arnab Chowdhury
    He expressed the gradient the way we write a matrix, with the square brackets. So is the gradient a vector and a matrix?
    (2 votes)
  • nele.labrenz
    When we take the derivative of f with respect to x, and therefore treat sin(y) as a constant, why doesn't it disappear in the derivative?
    (2 votes)
  • Gopu Kapoor
    For nabla, is the order of the components in the vector dependent on the order of the variables in the function call? For example, would:
    del f(y, x)

    be equal to:
    <df/dy, df/dx>
    ?
    Another example: would
    del f(x1, x2, x3, x4)

    translate to
    <df/dx1, df/dx2, df/dx3, df/dx4>
    ?
    (3 votes)
    • Iron Programming
      Good question! At first it seems an obvious thing to state, but we can't make assumptions in math, now can we?

      To answer your question: in my experience, we always write the gradient's components in the order of the function's arguments, like you described.

      Happy learning!
      - Convenient Colleague
      (3 votes)
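To make the ordering concrete, here is a small numerical sketch (my own illustration, not from the discussion): a finite-difference gradient whose components are produced one per argument, in argument order, applied to the video's f(x, y) = x^2 sin(y).

```python
import math

def grad(f, point, h=1e-6):
    """Numerical gradient: one partial derivative per argument, in argument order."""
    partials = []
    for i in range(len(point)):
        bumped = list(point)
        bumped[i] += h
        partials.append((f(*bumped) - f(*point)) / h)
    return partials

# f(x, y) = x^2 * sin(y), as in the video
f = lambda x, y: x**2 * math.sin(y)

gx, gy = grad(f, (2.0, 1.0))
# Components come out in argument order: (df/dx, df/dy)
assert abs(gx - 2 * 2.0 * math.sin(1.0)) < 1e-3   # df/dx = 2x sin(y)
assert abs(gy - 2.0**2 * math.cos(1.0)) < 1e-3    # df/dy = x^2 cos(y)
```

If you listed the arguments as f(y, x) instead, the same loop would emit (df/dy, df/dx), matching the answer above.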
  • Bhavishey Thapar
    The function f(x, y) = x^2 * sin(y) is a three-dimensional function with two inputs and one output, and the gradient of f is a two-dimensional vector-valued function. So isn't he incorrect when he says that the dimension of the gradient is the same as the dimension of the function? I think it is always one less.
    (2 votes)
    • Armen Minassian
      The dimension of the gradient is always the same as the dimension of the input space.
      This is due to the way we construct the gradient: we add a component for each variable.
      In this case we have a two-dimensional input space, and therefore a two-dimensional gradient :)
      (4 votes)
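A quick sketch of that rule (my own illustration, with made-up example functions): a numerical gradient always has exactly as many components as the function has inputs.

```python
def grad(f, point, h=1e-6):
    # One forward-difference partial per input variable,
    # so the gradient has len(point) components.
    base = f(*point)
    return [(f(*[p + (h if j == i else 0.0) for j, p in enumerate(point)]) - base) / h
            for i in range(len(point))]

f2 = lambda x, y: x**2 + y**2     # 2 inputs -> 2-component gradient
f3 = lambda x, y, z: x * y * z    # 3 inputs -> 3-component gradient

assert len(grad(f2, (1.0, 2.0))) == 2
assert len(grad(f3, (1.0, 2.0, 3.0))) == 3
```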
  • Edwind
    What was the other name for the gradient mentioned in the video?
    (2 votes)
  • Beni Csordas
    This dude is 3Blue1Brown, isn't he?
    (2 votes)
  • Dean Wanez
    Is it possible to include subtitles in the videos the way YouTube does?
    (2 votes)
  • jc mahne
    I love this new series of videos.
    I'm hoping you'll put some tutorial notes with it.
    How would you describe the way the total derivative works versus the full (del operator) derivative?
    (2 votes)
  • Arunabh Bhattacharya
    On a computer display there is something called a gradient. Does it have anything to do with the gradient being discussed here in math?
    (0 votes)
    • Will Springer
      The gradient that you are referring to—a gradual change in color from one part of the screen to another—could be modeled by a mathematical gradient.

      Since the gradient gives us the steepest rate of increase at a given point, imagine if you:
      1) Had a function that plotted a downward-facing paraboloid (like x^2 + y^2 + z = 0; take a look at the graph on Wolfram Alpha)
      2) Looked at that plot from the top down
      3) Programmed your computer so that the steeper the slope at a given point (that is, the longer the gradient vector there), the darker the color it placed at that point.

      That would effectively draw a circular color gradient, where the part of the circle near (x,y) = (0,0) would be lighter and would grow darker as you moved further out in the x and y directions.
      (5 votes)
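Those three steps can be sketched in a few lines (my own illustration; the particular paraboloid, the slope cap, and the grayscale mapping are all assumptions I chose for the example):

```python
import math

# Step 1: downward-facing paraboloid z = -(x^2 + y^2), i.e. x^2 + y^2 + z = 0.
# Its gradient with respect to (x, y) is (-2x, -2y).
def slope(x, y):
    return math.hypot(-2 * x, -2 * y)   # steepness = length of the gradient vector

# Steps 2-3: viewed from above, map steepness to a 0..255 grayscale value.
def shade(x, y, max_slope=10.0):
    s = min(slope(x, y), max_slope) / max_slope
    return round(255 * (1 - s))         # flat center is light, steep rim is dark

assert shade(0.0, 0.0) == 255             # lightest at the peak, where the slope is zero
assert shade(1.0, 0.0) > shade(2.0, 0.0)  # darker as you move outward
```

Evaluating shade(x, y) at every pixel of an image would paint exactly the circular color gradient described in the answer.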

Video transcript

- [Voiceover] So here I'm gonna talk about the gradient. And in this video, I'm only gonna describe how you compute the gradient, and in the next couple ones I'm gonna give the geometric interpretation. And I hate doing this, I hate showing the computation before the geometric intuition, since usually it should go the other way around, but the gradient is one of those weird things where the way that you compute it actually seems kind of unrelated to the intuition, and you'll see that. We'll connect them in the next few videos. But to do that, we need to know what both of them actually are. So on the computation side of things, let's say you have some sort of function. And I'm just gonna make it a two-variable function. And let's say it's f of x, y, equals x-squared sine of y. The gradient is a way of packing together all the partial derivative information of a function. So let's just start by computing the partial derivatives of this guy. So partial of f with respect to x is equal to... we look at this and we consider x the variable and y the constant. Well, in that case, sine of y is also a constant. As far as x is concerned, the derivative of x-squared is 2x, so we see that this will be 2x times that constant, sine of y. Whereas the partial derivative with respect to y: now we look up here and we say x is considered a constant, so x-squared is also considered a constant, so this is just a constant times sine of y, so that's gonna equal that same constant times the cosine of y, which is the derivative of sine. So now, what the gradient does is it just puts both of these together in a vector. And specifically, maybe I'll change colors here, you denote it with a little upside-down triangle. The name of that symbol is nabla, but you often just pronounce it del; you'd say del f, or gradient of f. And what this equals is a vector that has those two partial derivatives in it. So the first one is the partial derivative with respect to x, 2x times sine of y.
And the bottom one, the partial derivative with respect to y, x-squared cosine of y. And notice, maybe I should emphasize, this is actually a vector-valued function. So maybe I'll give it a little bit more room here and emphasize that it's got an x and a y. This is a function that takes in a point in two-dimensional space and outputs a two-dimensional vector. So you could also imagine doing this with three different variables. Then you would have three partial derivatives, and a three-dimensional output. And the way you might write this more generally is, we could go down here and say the gradient of any function is equal to a vector with its partial derivatives: partial of f with respect to x, and partial of f with respect to y. And in some sense, we call these partial derivatives. I like to think of the gradient as the full derivative, cuz it kind of captures all of the information that you need. So a very helpful mnemonic device with the gradient is to think about this triangle, this nabla symbol, as being a vector full of partial derivative operators. And by operator, I just mean something like partial with respect to x, something where you could give it a function, and it gives you another function. So you give this guy the function f and it gives you this expression, this multivariable function, as a result. So the nabla symbol is this vector full of different partial derivative operators. And in this case it might just be two of them. And this is kind of a weird thing, because it's like, what? This is a vector, it's got like operators in it, that's not what I thought vectors do. But you can kind of see where it's going. You can think of it as a memory trick, but in some sense it's a little bit deeper than that. And really, when you take this triangle, you can kind of imagine multiplying it by f; really it's like an operator taking in this function, and it's gonna give you another function.
It's like you take this triangle and you put an f in front of it, and you can imagine, like, this part gets quote-unquote multiplied with f, this part gets quote-unquote multiplied with f, but really you're just saying you take the partial derivative with respect to x, and then with y, and on and on. And the reason for doing this: this symbol comes up a lot in other contexts. There are two other operators that you're gonna learn about, called the divergence and the curl. We'll get to those later, all in due time. But it's useful to think about this vector-ish thing of partial derivatives. And I mean, one weird thing about it, you could say, ok, so this nabla symbol is a vector of partial derivative operators, what's its dimension? And it's like, how many dimensions do you got? Because if you had a three-dimensional function, that would mean that you should treat this like it's got three different operators as part of it. And, you know, I'd kinda finish this off down here. And if you had something that was 100-dimensional, it would have 100 different operators in it, and that's fine. It's really just, again, kind of a memory trick. So with that, that's how you compute the gradient. Not too much to it, it's pretty much just partial derivatives, but you smack 'em into a vector. Where it gets fun and where it gets interesting is with the geometric interpretation. I'll get to that in the next couple videos. It's also a super important tool for something called the directional derivative. So you've got a lot of fun stuff ahead.
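The computation the video walks through can be checked numerically. Below is a small sketch (my own, not part of the transcript) that verifies the gradient of f(x, y) = x^2 sin(y), namely (2x sin y, x^2 cos y), against centered finite differences at one sample point:

```python
import math

def f(x, y):
    return x**2 * math.sin(y)

def grad_f(x, y):
    # The gradient computed in the video: (df/dx, df/dy)
    return (2 * x * math.sin(y), x**2 * math.cos(y))

# Sanity check against centered finite differences at an arbitrary point
x0, y0, h = 1.3, 0.7, 1e-5
num_dx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
num_dy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)

gx, gy = grad_f(x0, y0)
assert abs(gx - num_dx) < 1e-6
assert abs(gy - num_dy) < 1e-6
```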