
# Directional derivatives (going deeper)

A more thorough look at the formula for directional derivatives, along with an explanation for why the gradient gives the slope of steepest ascent.

## Formal definition of the directional derivative

There are a couple of reasons you might care about a formal definition. For one thing, really understanding the formal definition of a new concept can make clear what is actually going on. But more importantly, the main benefit is that it gives you the confidence to recognize when such a concept can and cannot be applied.
As a warm-up, let's review the formal definition of the partial derivative, say with respect to $x$:
\begin{aligned} \dfrac{\partial f}{\partial x}(x_0, y_0) = \lim_{h \to 0} \dfrac{f(x_0 + h, y_0) - f(x_0, y_0)}{h} \end{aligned}
The connection between the informal way to read $\dfrac{\partial f}{\partial x}$ and the formal way to read the right-hand side is as follows:
| Symbol | Informal understanding | Formal understanding |
| --- | --- | --- |
| $\partial x$ | A tiny nudge in the $x$ direction. | A limiting variable $h$ which goes to $0$ and will be added to the first component of the function's input. |
| $\partial f$ | The resulting change in the output of $f$ after the nudge. | The difference between $f(x_0 + h, y_0)$ and $f(x_0, y_0)$, taken in the same limit as $h \to 0$. |
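As a sanity check, the limit above can be approximated in code by taking a small but finite $h$ in the difference quotient. This is a minimal sketch; the example function and all names here are illustrative choices, not from the article:

```python
# Approximate the limit definition of ∂f/∂x at (x0, y0) by using a
# small but finite h in the difference quotient.

def f(x, y):
    return x**2 * y  # example function; analytically, ∂f/∂x = 2xy

def partial_x(f, x0, y0, h=1e-6):
    """Forward-difference approximation of ∂f/∂x at (x0, y0)."""
    return (f(x0 + h, y0) - f(x0, y0)) / h

x0, y0 = 2.0, 3.0
print(partial_x(f, x0, y0))  # close to the exact value 2*x0*y0 = 12
```

Shrinking $h$ pulls the approximation toward the true limit, which is exactly what the formal definition promises.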
We could instead write this in vector notation, viewing the input point $(x_0, y_0)$ as a two-dimensional vector
\begin{aligned} \textbf{x}_0 = \left[ \begin{array}{c} x_0 \\ y_0 \end{array} \right] \end{aligned}
Here $\textbf{x}_0$ is written in bold to emphasize its vectoriness. It's a bit confusing to use a bold $\textbf{x}$ for the entire input rather than some other letter, since the letter $x$ is already used in un-bolded form to denote the first component of the input. But hey, that's convention, so we go with it.
Instead of writing the "nudged" input as $(x_0 + h, y_0)$, we write it as $\textbf{x}_0 + h\hat{\textbf{i}}$, where $\hat{\textbf{i}}$ is the unit vector in the $x$-direction:
\begin{aligned} \dfrac{\partial f}{\partial x}(\textbf{x}_0) = \lim_{h \to 0} \dfrac{f(\textbf{x}_0 + h \hat{\textbf{i}}) - f(\textbf{x}_0)}{h} \end{aligned}
In this notation, it's much easier to see how to generalize the partial derivative with respect to $x$ to the directional derivative along any vector $\vec{\textbf{v}}$:
\begin{aligned} \nabla_{\vec{\textbf{v}}} f(\textbf{x}_0) = \lim_{h \to 0} \dfrac{f(\textbf{x}_0 + h\vec{\textbf{v}}) - f(\textbf{x}_0)}{h} \end{aligned}
In this case, adding $h\vec{\textbf{v}}$ to the input for a limiting variable $h \to 0$ formalizes the idea of a tiny nudge in the direction of $\vec{\textbf{v}}$.
[Figure: a tiny nudge $h\vec{\textbf{v}}$ added to the input point $\textbf{x}_0$]
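The same finite-$h$ trick illustrates this definition numerically; the function and vector below are arbitrary choices for the sketch:

```python
# Approximate ∇_v f(x0, y0) = lim_{h→0} [f(x0 + h*v) − f(x0)] / h
# with a small finite h.

def f(x, y):
    return x**2 + y**2

def directional_derivative(f, x0, y0, v, h=1e-6):
    v1, v2 = v
    return (f(x0 + h * v1, y0 + h * v2) - f(x0, y0)) / h

x0, y0 = 1.0, 2.0
v = (3.0, 4.0)
# Analytically: ∇f(1, 2) = (2, 4), so ∇_v f = 2*3 + 4*4 = 22
print(directional_derivative(f, x0, y0, v))  # close to 22
```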

## Seeking connection between the definition and computation

Computing the directional derivative involves a dot product between the gradient $\nabla f$ and the vector $\vec{\textbf{v}}$. For example, in two dimensions, here's what this would look like:
\begin{aligned} \nabla_{\vec{\textbf{v}}} f(x, y) &= \nabla f \cdot \vec{\textbf{v}} \\ &= \left[ \begin{array}{c} \dfrac{\partial f}{\partial x} \\ \dfrac{\partial f}{\partial y} \end{array} \right] \cdot \left[ \begin{array}{c} \blueE{v_1} \\ \greenE{v_2} \end{array} \right] \\ &= \blueE{v_1} \dfrac{\partial f}{\partial x}(x, y) + \greenE{v_2} \dfrac{\partial f}{\partial y}(x, y) \end{aligned}
Here, $\blueE{v_1}$ and $\greenE{v_2}$ are the components of $\vec{\textbf{v}}$.
\begin{aligned} \vec{\textbf{v}} = \left[ \begin{array}{c} \blueE{v_1} \\ \greenE{v_2} \end{array} \right] \end{aligned}
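The two sides of this formula can be compared directly in code. This sketch (function, point, and vector all illustrative) computes $\nabla_{\vec{\textbf{v}}} f$ once from the gradient dot product and once from the limit definition:

```python
# Compare the dot-product formula for ∇_v f with its limit definition.

def f(x, y):
    return x**2 + y**2

def grad_f(x, y):
    return (2 * x, 2 * y)  # analytic gradient of the example f

def directional_via_dot(x0, y0, v):
    gx, gy = grad_f(x0, y0)
    return v[0] * gx + v[1] * gy  # ∇f · v

def directional_via_limit(x0, y0, v, h=1e-6):
    return (f(x0 + h * v[0], y0 + h * v[1]) - f(x0, y0)) / h

x0, y0, v = 1.0, 2.0, (3.0, 4.0)
print(directional_via_dot(x0, y0, v))    # exactly 22.0
print(directional_via_limit(x0, y0, v))  # close to 22.0
```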
The central question is, what does this formula have to do with the definition given above?

## Breaking down the nudge

The computation for $\nabla_{\vec{\textbf{v}}} f$ can be seen as a way to break down a tiny step in the direction of $\vec{\textbf{v}}$ into its $x$ and $y$ components.
[Figure: breaking a step along the vector $h\vec{\textbf{v}}$ into its $x$ and $y$ components]
Specifically, you can imagine the following procedure:
1. Start at some point $(x_0, y_0)$.
2. Choose a tiny value $h$.
3. Add $h\blueE{v_1}$ to $x_0$, which means stepping to the point $(x_0 + h\blueE{v_1}, y_0)$. From what we know of partial derivatives, this will change the output of the function by about
\begin{aligned} h\blueE{v_1} \left(\dfrac{\partial f}{\partial x}(x_0, y_0) \right) \end{aligned}
4. Now add $h\greenE{v_2}$ to $y_0$ to bring us up/down to the point $(x_0 + h\blueE{v_1}, y_0 + h\greenE{v_2})$. The resulting change to $f$ is now about
\begin{aligned} h\greenE{v_2}\left( \dfrac{\partial f}{\partial y}(x_0 + h\blueE{v_1}, y_0) \right) \end{aligned}
Adding the results of steps 3 and 4, the total change to the function upon moving from the input $(x_0, y_0)$ to the input $(x_0 + h\blueE{v_1}, y_0 + h\greenE{v_2})$ has been about
\begin{aligned} h\blueE{v_1} \left(\dfrac{\partial f}{\partial x}(x_0, y_0) \right) + h\greenE{v_2}\left( \dfrac{\partial f}{\partial y}(x_0 \redD{+ h\blueE{v_1}}, y_0) \right) \end{aligned}
This is very close to the expression for the directional derivative, which says the change in $f$ due to this step $h\vec{\textbf{v}}$ should be about
\begin{aligned} &\phantom{=}h \nabla_{\vec{\textbf{v}}} f(x_0, y_0) \\ &= h \vec{\textbf{v}} \cdot \nabla f(x_0, y_0) \\ &= h\blueE{v_1}\dfrac{\partial f}{\partial x}(x_0, y_0) + h\greenE{v_2}\dfrac{\partial f}{\partial y}(x_0, y_0) \end{aligned}
However, this differs slightly from the result of our step-by-step argument, in which the partial derivative with respect to $y$ is taken at the point $(x_0 + h\blueE{v_1}, y_0)$, not at the point $(x_0, y_0)$.
Luckily, we are considering very, very small values of $h$; more technically, we should be talking about the limit as $h \to 0$. As long as the partial derivative $\dfrac{\partial f}{\partial y}$ is continuous, evaluating it at $(x_0 + h\blueE{v_1}, y_0)$ is almost the same as evaluating it at $(x_0, y_0)$, and the difference between the two evaluations vanishes as $h \to 0$.
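The step-by-step argument above can be replayed numerically: sum the $x$-step change and the $y$-step change (with $\partial f / \partial y$ evaluated at the nudged point, exactly as in the argument) and compare against the actual change in $f$. The function, point, and vector are illustrative choices:

```python
# Verify that the two-step decomposition of the nudge matches the
# actual change in f, up to an error that shrinks like h**2.

def f(x, y):
    return x**2 * y + y**3

def df_dx(x, y):
    return 2 * x * y        # ∂f/∂x for the example f

def df_dy(x, y):
    return x**2 + 3 * y**2  # ∂f/∂y for the example f

x0, y0 = 1.0, 2.0
v1, v2 = 0.6, 0.8
h = 1e-4

# Step 3: change caused by the x-nudge, using ∂f/∂x at (x0, y0)
change_x = h * v1 * df_dx(x0, y0)
# Step 4: change caused by the y-nudge, using ∂f/∂y at the nudged point
change_y = h * v2 * df_dy(x0 + h * v1, y0)

predicted = change_x + change_y
actual = f(x0 + h * v1, y0 + h * v2) - f(x0, y0)
print(abs(actual - predicted))  # tiny compared to the change itself
```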

## Why does the gradient point in the direction of steepest ascent?

Having learned about directional derivatives, we can now understand why the direction of the gradient is the direction of steepest ascent.
[Figure: walking along a graph in the direction of steepest ascent]
Specifically, here's the question at hand.
Setup:
• Let $f$ be some scalar-valued multivariable function, such as $f(x, y) = x^2 + y^2$.
• Let $(x_0, y_0)$ be a particular input point.
• Consider all possible directions, i.e. all unit vectors $\hat{\textbf{u}}$ in the input space of $f$.
Question (informal): If we start at $(x_0, y_0)$, which direction should we walk so that the output of $f$ increases most quickly?
Question (formal): Which unit vector $\hat{\textbf{u}}$ maximizes the directional derivative along $\hat{\textbf{u}}$?
\begin{aligned} \nabla_{\hat{\textbf{u}}} f(x_0, y_0) = \underbrace{\hat{\textbf{u}} \cdot \nabla f(x_0, y_0)}_{ \text{Maximize this quantity} } \end{aligned}
The Cauchy-Schwarz inequality tells us that this dot product is maximized by the unit vector pointing in the same direction as $\nabla f(x_0, y_0)$.
[Figure: the dot product $\hat{\textbf{u}} \cdot \nabla f$ is maximized when the two vectors align]
Notice, the fact that the gradient points in the direction of steepest ascent is a consequence of the more fundamental fact that computing any directional derivative requires taking a dot product with $\nabla f$.
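To see this concretely, one can sample many unit directions and check that the dot product with the gradient peaks when the direction lines up with the gradient itself. This sketch uses the article's example $f(x, y) = x^2 + y^2$; the sample point is an arbitrary choice:

```python
import math

# Scan unit vectors u = (cos θ, sin θ) and find the one maximizing
# ∇_u f = u · ∇f at a sample point.

def grad_f(x, y):
    return (2 * x, 2 * y)  # gradient of f(x, y) = x**2 + y**2

x0, y0 = 1.0, 2.0
gx, gy = grad_f(x0, y0)

best_theta, best_value = 0.0, -float("inf")
for k in range(3600):
    theta = 2 * math.pi * k / 3600
    value = math.cos(theta) * gx + math.sin(theta) * gy
    if value > best_value:
        best_theta, best_value = theta, value

grad_theta = math.atan2(gy, gx)        # direction of ∇f
print(best_theta, grad_theta)          # nearly identical angles
print(best_value, math.hypot(gx, gy))  # max value ≈ ||∇f||
```

The maximum directional derivative also comes out equal to $\|\nabla f\|$, another standard consequence of the dot-product formula.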

## Want to join the conversation?

• I'm having trouble understanding the 3rd step under the formal argument. If we move hv1 in the x direction, how does this imply that the output will be hv1*fx(x0,y0)? (Sorry for the notation - I'm on my phone).
• That is the definition of the derivative. Remember:
fₓ(x₀,y₀) = lim_Δx→0 [(f(x₀+Δx,y₀)-f(x₀,y₀))/Δx]
Then, since h is very small, hv₁ is also very small, so we can use it in place of Δx to get the approximation:
fₓ(x₀,y₀) ≈ (f(x₀+hv₁,y₀)-f(x₀,y₀))/(hv₁)
We can then rearrange this to get:
f(x₀+hv₁,y₀) ≈ f(x₀,y₀) + hv₁ × fₓ(x₀,y₀)
• Can anyone help me understand how to solve the puzzle at the end of the article? I am having trouble understanding it.
• I had trouble with this puzzle too, but then I thought about it in terms of vectors. We need to maximize 100A + 20B + 2C, right? By definition of the dot product, this expression is equal to the dot product of two vectors [100, 20, 2] * [A, B, C]. So we want to maximize the dot product. When does the dot product have the maximum value? It is maximum when two vectors are parallel, or, in other words, one vector is multiple of the other (this can be understood from the graphical interpretation of the dot product). Therefore, our vector [A, B, C] should be [100x, 20x, 2x], where x is some number.

The second insight is to express A^2 + B^2 + C^2 = 10404 equation in vector notation. Expression A^2 + B^2 + C^2 is equal to the dot product of vector [A,B,C] with itself:

[A,B,C]*[A,B,C] = 10404.

Here instead of [A,B,C] we substitute [100x, 20x, 2x] vector and solve for x:

[100x, 20x, 2x] · [100x, 20x, 2x] = 10000x^2 + 400x^2 + 4x^2 = 10404x^2 = 10404,
so x^2 = 1, and since we want to maximize (not minimize) the objective, x = 1.

Therefore, our vector [A,B,C] is [100 * 1, 20 * 1, 2 * 1] = [100, 20, 2].

A = 100
B = 20
C = 2.

I hope it wasn't confusing. :)
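If it helps, the solution above can be sanity-checked numerically: random points on the constraint sphere should never beat (100, 20, 2). This sketch assumes the puzzle exactly as stated in the answer, maximizing 100A + 20B + 2C subject to A² + B² + C² = 10404:

```python
import math
import random

def objective(a, b, c):
    return 100 * a + 20 * b + 2 * c

A, B, C = 100, 20, 2
assert A**2 + B**2 + C**2 == 10404  # the claimed optimum is feasible
best = objective(A, B, C)           # = 10404

random.seed(0)
for _ in range(10_000):
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    scale = math.sqrt(10404) / math.sqrt(x*x + y*y + z*z)
    # Any other point on the sphere scores no higher (Cauchy-Schwarz)
    assert objective(x * scale, y * scale, z * scale) <= best + 1e-6

print(best)  # 10404
```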
• step 4. how does adding the change in the x direction and y direction give us the total change in the function? we are adding "perpendicular numbers", not vectors, so the adding should look more like pythagoras, shouldn't it?
• The two changes being added are not "perpendicular numbers" — each is a plain number measuring how much the output of f went up or down, so they add directly. This works because the function must be locally essentially linear, i.e., there must be a linear approximation
L(x) = f(a) + Df(a)(x−a).
Ignoring the higher-order error and adding the changes directly makes exact sense for L(x).
• Isn't it better to color "h" black in the figure just below the subtitle "Breaking down the nudge," for consistency? Just a suggestion.
• If you meant that we should bold h like 𝐡, then this is my answer: No, because h is a scalar value, and we're taking the limit as h approaches the (scalar value) zero. This corresponds to the nudge in the input, which is the vector h𝐯, approaching the vector value 𝟎. h𝐯 is equal to the input nudge direction 𝐯 scaled (multiplied) by the step size h.
• I don't understand this sentence: "The famous triangle inequality tells us that this will be maximized by the unit vector in the direction ∇f(x₀, y₀)."

To me, this doesn't seem obvious at all. Can I find an explanation somewhere else on Khan Academy? I have looked at the triangle inequality ||x + y|| <= ||x|| + ||y||, but I don't understand how the two things are related, other than that both somehow talk about vectors. The directional derivative works with dot products, not with adding vectors together as in the triangle inequality, so I don't immediately see the connection between the two.
• I'm in the same boat and don't see how the triangle inequality can be applied; however, the slightly different Cauchy-Schwarz inequality works. The Cauchy-Schwarz inequality states that
x·y <= ||x|| * ||y||
for any two vectors x and y. Cauchy-Schwarz also says the inequality becomes an equality,
x · y = ||x|| * ||y||,
precisely when x and y are parallel (pointing the same way).
• Regarding the directional derivative, I cannot understand why the vector v does not have to be a unit vector — namely, why it can have an arbitrary magnitude.

On the one hand, the formal definition of the directional derivative supplied in the article relies on the unit vectors i and j. From my point of view, it is easy to see that this definition will hold for any direction other than i and j, as long as we deal with unit vectors. Another thing, which I cannot figure out clearly, is whether the magnitude of these vectors can also be generalized. To illustrate what shocks me: if we let the magnitude of v go to infinity, the formula ends up with an indeterminate form h·||v|| = 0·∞, because h goes to 0 and ||v|| goes to ∞. In contrast, if we restrict ourselves to unit vectors, this would not happen.

On the other hand, as far as I understand, we are trying to get a measure that tells us how f(x, y) changes in the direction of v, which has nothing to do with how far we move in that direction. Therefore the only way I can see to capture just the variation due to the change in direction (and nothing else) is "playing" with the number 1 (i.e. unit vectors) whenever we multiply.
• Hi, I have a question on the following point: "Computing the directional derivative involves a dot product between the gradient and the vector v." When I look at the definition of the dot product, it says |a|·|b|·cos(theta),
but in this definition no cos(theta) is involved. Does that mean the gradient (which holds the partial derivatives with respect to x and y) is not considered a vector here? Or is it? If yes, why is cos(theta) not involved here?
• There's an error at the formal definition part. You say dx goes to zero. That means df/dx is undefined?
• It is not undefined because the definition says that dx approaches zero, so it is never really zero. That is why you take the limit.
• I have more of a conceptual question. If we think of a partial derivative as a little nudge in a certain direction, with that nudge (h) approaching 0, why does that concept not transfer to the directional derivative? Namely, why would the directional derivative along 2v be twice as big as the one along v? If we think of the nudge along v as so tiny that it goes to 0, why would 2v versus v even matter?
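One way to see the scaling being asked about (function and vectors here are illustrative): for the same shrinking h, the nudge h·(2v) covers twice the ground of h·v, so the difference quotient comes out twice as large:

```python
# The directional derivative along 2v is twice the one along v,
# even though the nudge itself shrinks to zero in both cases.

def f(x, y):
    return x**2 + y**2

def ddv(x0, y0, v, h=1e-6):
    return (f(x0 + h * v[0], y0 + h * v[1]) - f(x0, y0)) / h

x0, y0 = 1.0, 2.0
print(ddv(x0, y0, (3.0, 4.0)))  # close to 22
print(ddv(x0, y0, (6.0, 8.0)))  # close to 44 — doubled
```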
• I think this part may be wrong (or at least I don't fully understand it):

"However, this differs slightly from the result of our step-by-step argument, in which the partial derivative with respect to y is taken at the point (x_0 + hv_1, y_0) not at the point (x_0, y_0)"

I would think we could treat the change in y just like we treated the change in x -- they are in essence happening at the same time -- and how would you chose the order anyway?
• The thing is that lim ℎ→0 𝑥₀ + ℎ𝑣₁ = 𝑥₀

With 𝒗 = (𝑣₁, 𝑣₂)
𝛻𝒗 𝑓(𝑥₀, 𝑦₀) = lim ℎ→0 [𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀ + ℎ𝑣₂) − 𝑓(𝑥₀, 𝑦₀)]∕ℎ

= lim ℎ→0 [𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀ + ℎ𝑣₂) − 𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀) + 𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀) − 𝑓(𝑥₀, 𝑦₀)]∕ℎ

= lim ℎ→0 [𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀ + ℎ𝑣₂) − 𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀)]∕ℎ
+ lim ℎ→0 [𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀) − 𝑓(𝑥₀, 𝑦₀)]∕ℎ

= lim ℎ→0 𝑣₂[𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀ + ℎ𝑣₂) − 𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀)]∕(ℎ𝑣₂) + lim ℎ→0 𝑣₁[𝑓(𝑥₀ + ℎ𝑣₁, 𝑦₀) − 𝑓(𝑥₀, 𝑦₀)]∕(ℎ𝑣₁)

= 𝑣₂ 𝜕∕𝜕𝑦[𝑓(𝑥₀, 𝑦₀)] + 𝑣₁ 𝜕∕𝜕𝑥[𝑓(𝑥₀, 𝑦₀)]