
Directional derivatives (going deeper)

A more thorough look at the formula for directional derivatives, along with an explanation for why the gradient gives the slope of steepest ascent.

Background:

This article is targeted at those who want a deeper understanding of the directional derivative and its formula.

Formal definition of the directional derivative

There are a couple of reasons you might care about a formal definition. For one thing, really understanding the formal definition of a new concept can make clear what is really going on. More importantly, it gives you the confidence to recognize when such a concept can and cannot be applied.
As a warm up, let's review the formal definition of the partial derivative, say with respect to x:
f_x(x_0, y_0) = \lim_{h \to 0} \frac{f(x_0 + h,\, y_0) - f(x_0, y_0)}{h}
The connection between the informal way to read \partial f / \partial x and the formal way to read the right-hand side is as follows:

| Symbol | Informal understanding | Formal understanding |
| --- | --- | --- |
| \partial x | A tiny nudge in the x direction. | A limiting variable h which goes to 0, and will be added to the first component of the function's input. |
| \partial f | The resulting change in the output of f after the nudge. | The difference between f(x_0 + h,\, y_0) and f(x_0, y_0), taken in the same limit as h \to 0. |
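To make the limit definition concrete, here is a quick numerical sketch. The function f(x, y) = x²y and the point (3, 2) are illustrative choices, not from the article:

```python
# Approximate the partial derivative with respect to x using the limit
# definition, with a small but finite h.
def f(x, y):
    return x**2 * y  # illustrative example function

def partial_x(f, x0, y0, h=1e-6):
    # Difference quotient from the formal definition.
    return (f(x0 + h, y0) - f(x0, y0)) / h

# The exact partial derivative of x**2 * y with respect to x is 2*x*y,
# which at (3, 2) equals 12.
print(partial_x(f, 3.0, 2.0))  # close to 12
```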
We could instead write this in vector notation, viewing the input point (x0,y0) as a two-dimensional vector
\mathbf{x}_0 = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix}
Here \mathbf{x}_0 is written in bold to emphasize its vectoriness. It's a bit confusing to use a bold x for the entire input rather than some other letter, since the letter x is already used in an un-bolded form to denote the first component of the input. But hey, that's convention, so we go with it.
Instead of writing the "nudged" input as (x_0 + h,\, y_0), we write it as \mathbf{x}_0 + h\hat{\imath}, where \hat{\imath} is the unit vector in the x-direction:

f_x(\mathbf{x}_0) = \lim_{h \to 0} \frac{f(\mathbf{x}_0 + h\hat{\imath}) - f(\mathbf{x}_0)}{h}
In this notation, it's much easier to see how to generalize the partial derivative with respect to x to the directional derivative along any vector \vec{v}:

\nabla_{\vec{v}} f(\mathbf{x}_0) = \lim_{h \to 0} \frac{f(\mathbf{x}_0 + h\vec{v}) - f(\mathbf{x}_0)}{h}

In this case, adding h\vec{v} to the input, with the limiting variable h \to 0, formalizes the idea of a tiny nudge in the direction of \vec{v}.
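The same finite-h approximation works for this directional derivative definition (again with the illustrative choice f(x, y) = x²y):

```python
# Approximate the directional derivative via its limit definition,
# nudging the input by h*v for a small finite h.
def f(x, y):
    return x**2 * y  # illustrative example function

def directional_derivative_limit(f, x0, y0, v, h=1e-6):
    # (f(x0 + h*v) - f(x0)) / h, written out in components.
    return (f(x0 + h * v[0], y0 + h * v[1]) - f(x0, y0)) / h

# For f(x, y) = x**2 * y at (3, 2) along v = (1, 1), the exact value is
# v1*(2xy) + v2*(x**2) = 12 + 9 = 21.
print(directional_derivative_limit(f, 3.0, 2.0, (1.0, 1.0)))  # close to 21
```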
[Figure: the nudge h\vec{v} used in the directional derivative]

Seeking connection between the definition and computation

Computing the directional derivative involves a dot product between the gradient \nabla f and the vector \vec{v}. For example, in two dimensions, here's what this would look like:

\nabla_{\vec{v}} f(x, y) = \nabla f \cdot \vec{v} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix} \cdot \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = v_1 \frac{\partial f}{\partial x}(x, y) + v_2 \frac{\partial f}{\partial y}(x, y)
Here, v_1 and v_2 are the components of \vec{v}:

\vec{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
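The dot-product formula can be checked against the limit definition numerically (f(x, y) = x²y is an illustrative choice):

```python
# Estimate the gradient with finite differences, then take the dot product
# with v, matching the two-dimensional formula above.
def f(x, y):
    return x**2 * y  # illustrative example function

def gradient(f, x0, y0, h=1e-6):
    fx = (f(x0 + h, y0) - f(x0, y0)) / h
    fy = (f(x0, y0 + h) - f(x0, y0)) / h
    return (fx, fy)

def directional_derivative(f, x0, y0, v):
    gx, gy = gradient(f, x0, y0)
    return v[0] * gx + v[1] * gy  # dot product grad(f) . v

# Exact answer at (3, 2) along (1, 1) is 2xy + x**2 = 12 + 9 = 21.
print(directional_derivative(f, 3.0, 2.0, (1.0, 1.0)))  # close to 21
```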
The central question is, what does this formula have to do with the definition given above?

Breaking down the nudge

The computation for \nabla_{\vec{v}} f can be seen as a way to break down a tiny step in the direction of \vec{v} into its x and y components.
[Figure: breaking apart a step along the vector h\vec{v} into its x and y components]
Specifically, you can imagine the following procedure:
  1. Start at some point (x0,y0).
  2. Choose a tiny value h.
  3. Add hv_1 to x_0, which means stepping to the point (x_0 + hv_1,\, y_0). From what we know of partial derivatives, this will change the output of the function by about

h v_1 \frac{\partial f}{\partial x}(x_0, y_0)

  4. Now add hv_2 to y_0 to bring us up/down to the point (x_0 + hv_1,\, y_0 + hv_2). The resulting change to f is now about

h v_2 \frac{\partial f}{\partial y}(x_0 + hv_1,\, y_0)
Adding the results of steps 3 and 4, the total change to the function upon moving from the input (x_0, y_0) to the input (x_0 + hv_1,\, y_0 + hv_2) has been about

h v_1 \frac{\partial f}{\partial x}(x_0, y_0) + h v_2 \frac{\partial f}{\partial y}(x_0 + hv_1,\, y_0)
This is very close to the expression for the directional derivative, which says the change in f due to this step h\vec{v} should be about

h \nabla_{\vec{v}} f(x_0, y_0) = h\, \nabla f(x_0, y_0) \cdot \vec{v} = h v_1 \frac{\partial f}{\partial x}(x_0, y_0) + h v_2 \frac{\partial f}{\partial y}(x_0, y_0)
However, this differs slightly from the result of our step-by-step argument, in which the partial derivative with respect to y is taken at the point (x0+hv1,y0), not at the point (x0,y0).
Luckily, we are considering very, very small values of h; more technically, we are taking the limit as h \to 0. Assuming the partial derivative f_y is continuous, evaluating f_y at (x_0 + hv_1,\, y_0) is almost the same as evaluating it at (x_0, y_0), and the difference between the two evaluations vanishes as h approaches 0.
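The two-step procedure above can be sketched numerically (same illustrative f(x, y) = x²y; the two partial steps together match h·∇_v f up to terms of order h²):

```python
# Walk the nudge h*v in two steps: first along x, then along y,
# and compare the total change with h times the directional derivative.
def f(x, y):
    return x**2 * y  # illustrative example function

x0, y0 = 3.0, 2.0
v1, v2 = 1.0, 1.0
h = 1e-4

step_x = f(x0 + h*v1, y0) - f(x0, y0)                 # change from the x-nudge (step 3)
step_y = f(x0 + h*v1, y0 + h*v2) - f(x0 + h*v1, y0)   # change from the y-nudge (step 4)

total = step_x + step_y
# Exact directional derivative at (3, 2) along (1, 1) is 21,
# so the total change should be about h * 21.
print(total, h * 21)  # the two agree up to terms of order h**2
```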

Why does the gradient point in the direction of steepest ascent?

Having learned about directional derivatives, we can now understand why the direction of the gradient is the direction of steepest ascent.
[Figure: steepest ascent concept]
Specifically, here's the question at hand.
Setup:
  • Let f be some scalar-valued multivariable function, such as f(x, y) = x^2 + y^2.
  • Let (x_0, y_0) be a particular input point.
  • Consider all possible directions, i.e. all unit vectors \hat{u} in the input space of f.
Question (informal): If we start at (x_0, y_0), which direction should we walk so that the output of f increases most quickly?
Question (formal): Which unit vector \hat{u} maximizes the directional derivative along \hat{u}?

\nabla_{\hat{u}} f(x_0, y_0) = \nabla f(x_0, y_0) \cdot \hat{u} \qquad \text{(maximize this quantity)}
The Cauchy-Schwarz inequality tells us that this dot product is maximized by the unit vector \hat{u} pointing in the direction of \nabla f(x_0, y_0).
[Figure: maximizing the dot product]
Notice that the gradient pointing in the direction of steepest ascent is a consequence of the more fundamental fact that every directional derivative is computed by taking a dot product with \nabla f.
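This claim can be verified numerically by scanning many unit vectors (a sketch using the article's example f(x, y) = x² + y², whose gradient (2x, 2y) is known exactly):

```python
import math

# At (x0, y0), scan 3600 unit vectors u = (cos t, sin t) and pick the one
# that maximizes grad(f) . u; it should point the same way as the gradient.
x0, y0 = 1.0, 2.0
gx, gy = 2 * x0, 2 * y0  # exact gradient of x**2 + y**2 at (x0, y0)

best_angle = max(
    (i * 2 * math.pi / 3600 for i in range(3600)),
    key=lambda t: math.cos(t) * gx + math.sin(t) * gy,
)
grad_angle = math.atan2(gy, gx)  # direction of the gradient itself
print(best_angle, grad_angle)  # the two angles nearly coincide
```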

Want to join the conversation?

  • Rich Rasta:
    I'm having trouble understanding the 3rd step under the formal argument. If we move hv1 in the x direction, how does this imply that the output will be hv1*fx(x0,y0)? (Sorry for the notation - I'm on my phone).
    (24 votes)
    • Tejas:
      That is the definition of the derivative. Remember:
      fₓ(x₀,y₀) = lim_Δx→0 [(f(x₀+Δx,y₀)-f(x₀,y₀))/Δx]
      Then, for a small h we can take Δx = hv₁, which gives the approximation:
      fₓ(x₀,y₀) ≈ (f(x₀+hv₁,y₀)-f(x₀,y₀))/(hv₁)
      We can then rearrange this to get:
      f(x₀+hv₁,y₀) ≈ hv₁ × fₓ(x₀,y₀) + f(x₀,y₀)
      (31 votes)
  • 98angelpadilla:
    Can anyone help me understand how to solve the puzzle at the end of the article? I am having trouble understanding it.
    (5 votes)
    • Evgenii Neumerzhitckii:
      I had trouble with this puzzle too, but then I thought about it in terms of vectors. We need to maximize 100A + 20B + 2C, right? By definition of the dot product, this expression is equal to the dot product of the two vectors [100, 20, 2] · [A, B, C]. So we want to maximize the dot product. When does the dot product have its maximum value? It is maximum when the two vectors are parallel, or, in other words, when one vector is a multiple of the other (this can be understood from the graphical interpretation of the dot product). Therefore, our vector [A, B, C] should be [100x, 20x, 2x], where x is some number.

      The second insight is to express A^2 + B^2 + C^2 = 10404 equation in vector notation. Expression A^2 + B^2 + C^2 is equal to the dot product of vector [A,B,C] with itself:

      [A,B,C]*[A,B,C] = 10404.

      Here instead of [A,B,C] we substitute [100x, 20x, 2x] vector and solve for x:

      [100x, 20x, 2x] · [100x, 20x, 2x] = 10000x^2 + 400x^2 + 4x^2 = 10404x^2 = 10404.
      x^2 = 1, so x = 1 (taking the positive root).

      Therefore, our vector [A,B,C] is [100 * 1, 20 * 1, 2 * 1] = [100, 20, 2].

      So, we have our answer:
      A = 100
      B = 20
      C = 2.

      I hope it wasn't confusing. :)
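This solution can be double-checked numerically (the random search is only a sanity check, not a proof):

```python
import math
import random

# The puzzle: maximize 100A + 20B + 2C subject to A**2 + B**2 + C**2 = 10404.
# The proposed answer is (A, B, C) = (100, 20, 2).
A, B, C = 100, 20, 2
print(A**2 + B**2 + C**2)   # 10404 -- the constraint holds
print(100*A + 20*B + 2*C)   # 10404 -- the value attained

# Sanity check: random points on the same sphere never do better.
best = 0.0
for _ in range(100_000):
    a, b, c = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
    scale = math.sqrt(10404) / math.sqrt(a*a + b*b + c*c)
    best = max(best, 100*a*scale + 20*b*scale + 2*c*scale)
print(best <= 10404 + 1e-6)  # True
```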
      (66 votes)
  • vyletel:
    I don't understand this sentence: The famous triangle inequality tells us that this will be maximized by the unit vector in the direction (nabla) f (x0, y0)

    To me, this doesn't seem obvious at all. Can I find an explanation somewhere else on Khan Academy? I have looked at the triangle inequality ||x + y|| <= ||x|| + ||y||, but I don't understand how the two things are related, other than that both somehow talk about vectors. The directional derivative works with dot products, not with adding vectors together as in the triangle inequality, so I don't immediately see the connection between the two.
    (3 votes)
    • JamesCagalawan:
      I'm in the same boat and don't see how the triangle inequality can be applied; however, the slightly different Cauchy-Schwarz inequality works. The Cauchy-Schwarz inequality states that
      x·y <= ||x|| * ||y||
      for any two vectors x and y. Cauchy-Schwarz also says that the inequality can be turned to an equality
      x · y =  ||x|| * ||y||
      if x and y are parallel and point in the same direction.
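A small numerical illustration of the Cauchy-Schwarz inequality and its equality case (the vectors are illustrative choices):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

# Cauchy-Schwarz: x . y <= ||x|| * ||y||, with equality when y is a
# positive multiple of x.
x = [3.0, -1.0, 2.0]
y = [2 * a for a in x]                  # parallel: y = 2x
print(dot(x, y))                        # 28.0
print(norm(x) * norm(y))                # about 28.0 -- the equality case

z = [1.0, 0.0, 0.0]                     # not parallel to x
print(dot(x, z) <= norm(x) * norm(z))   # True (strict inequality here)
```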
      (7 votes)
  • prashantcomsc:
    For a given gradient vector, I can understand that a unit vector in the direction of the gradient vector will give the maximum value of the dot product between itself and the gradient vector.
    But how does that prove the gradient vector is itself the direction of maximum ascent?
    (4 votes)
    • sm2701:
      I don't think that quite matters. If a unit vector in the direction of the gradient vector is the direction of greatest ascent, then moving in the direction of the gradient vector is also moving in the direction of greatest ascent. One is just a multiple of the other; they still point in the same direction.
      (2 votes)
  • stallionsri:
    Hi, I have a question about the statement "Computing the directional derivative involves a dot product between the gradient and the vector v". When I look at the definition of the dot product, it says |a|·|b|·cos θ.
    But no cos θ is involved in this formula. Does that mean the gradient (which holds the partial derivatives with respect to x and y) is not considered a vector here? Or is it? If so, why is no cos θ involved?
    (3 votes)
  • jaeyungkim12:
    Wouldn't it be better to color "h" black in the figure just below the subtitle "Breaking down the nudge", for consistency? Just a suggestion.
    (3 votes)
    • saajidchowdhury:
      If you meant that we should bold h like 𝐡, then this is my answer: No, because h is a scalar value, and we're taking the limit as h approaches the (scalar value) zero. This corresponds to the nudge in the input, which is the vector h𝐯, approaching the vector value 𝟎. h𝐯 is equal to the input nudge direction 𝐯 scaled (multiplied) by the step size h.
      (4 votes)
  • Taras.Pokalchuk:
    Step 4: how does adding the change in the x direction and the change in the y direction give us the total change in the function? We are adding "perpendicular numbers", not vectors, so shouldn't the addition look more like the Pythagorean theorem?
    (3 votes)
  • Yixuan Liu:
    I have more of a conceptual question. If we think of a partial derivative as a little nudge in a certain direction, with that nudge (h) approaching 0, why does that concept not transfer to the directional derivative? Namely, why would the directional derivative of 2v be twice as big as that of v? If we think of the nudge along v as so tiny that it goes to 0, why would 2v versus v even matter?
    (4 votes)
    • Victrix:
      To gain an intuitive understanding of that particular concept, you'd need to consider how single-variable differentiation deals with constant multipliers. For example, if y = c·f(x) for a constant c, then dy∕dx = c·f′(x).

      In the same respect, the scaling factor of the directional vector v is taken into consideration. Note that while we're dealing with something approaching 0, it doesn't ever quite get there.
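This scaling behavior can be checked numerically (with the illustrative choice f(x, y) = x²y):

```python
# The directional derivative along 2v is twice the one along v, even
# though h -> 0 in both limits: the factor comes from the nudge h*v itself.
def f(x, y):
    return x**2 * y  # illustrative example function

def directional_derivative_limit(f, x0, y0, v, h=1e-7):
    return (f(x0 + h * v[0], y0 + h * v[1]) - f(x0, y0)) / h

d_v = directional_derivative_limit(f, 3.0, 2.0, (1.0, 1.0))
d_2v = directional_derivative_limit(f, 3.0, 2.0, (2.0, 2.0))
print(d_2v / d_v)  # close to 2
```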
      (1 vote)
  • Scott Edwards:
    I think this part may be wrong (or at least I don't fully understand it):

    "However, this differs slightly from the result of our step-by-step argument, in which the partial derivative with respect to y is taken at the point (x_0 + hv_1, y_0) not at the point (x_0, y_0)"

    I would think we could treat the change in y just like we treated the change in x -- they are in essence happening at the same time -- and how would you choose the order anyway?
    (2 votes)
    • Jerry Nilsson:
      The thing is that lim ℎ→0 𝑥₀ + ℎ𝑣₁ = 𝑥₀

      With 𝒗 = (𝑣₁, 𝑣₂)
      𝛻𝒗 𝑓(𝑥₀, 𝑦₀) = lim ℎ→0 [𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀ + ℎ𝑣₂) − 𝑓(𝑥₀, 𝑦₀)]∕ℎ

      = lim ℎ→0 [𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀ + ℎ𝑣₂) − 𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀) + 𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀) − 𝑓(𝑥₀, 𝑦₀)]∕ℎ

      = lim ℎ→0 [𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀ + ℎ𝑣₂) − 𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀)]∕ℎ
      + lim ℎ→0 [𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀) − 𝑓(𝑥₀, 𝑦₀)]∕ℎ

      = lim ℎ→0 𝑣₂[𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀ + ℎ𝑣₂) − 𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀)]∕(ℎ𝑣₂)
      + lim ℎ→0 𝑣₁[𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀) − 𝑓(𝑥₀, 𝑦₀)]∕(ℎ𝑣₁)

      = 𝑣₂ 𝜕𝑓∕𝜕𝑦 (𝑥₀, 𝑦₀) + 𝑣₁ 𝜕𝑓∕𝜕𝑥 (𝑥₀, 𝑦₀)

      where the first limit uses lim ℎ→0 (𝑥₀ + ℎ𝑣₁) = 𝑥₀, which is why continuity matters.
      (3 votes)
  • roger.llrt:
    Regarding the directional derivative, I cannot understand why the vector v does not have to be a unit vector, i.e. why it can have an arbitrary magnitude.

    On the one hand, the formal definition of the directional derivative supplied in the article relies on the unit vectors i and j. From my point of view, it is easy to see that this definition will hold for any direction other than i and j, as long as we stick with unit vectors. Another thing, and this is what I cannot figure out clearly, is whether the magnitude of these vectors can also be generalized. To illustrate what bothers me: if we let the magnitude of v go to infinity, the formula ends up with an indeterminate form h·||v|| = 0·∞, because h goes to 0 while ||v|| goes to infinity. In contrast, if we restrict ourselves to unit vectors this would not happen.

    On the other hand, as far as I understand, we are trying to get a measure that tells us how f(x, y) changes in the direction of v, which has nothing to do with how far we move in that direction. Therefore the only way I can see to capture just the variation due to the change in direction (and nothing else) is "playing" with the number 1 (i.e. unit vectors) whenever we multiply.
    (3 votes)