Reasoning behind second partial derivative test
For those of you who want to see why the second partial derivative test works, I cover a sketch of a proof here.
Background
In the last article, I gave the statement of the second partial derivative test, but I only gave a loose intuition for why it's true. This article is for those who want to dig a bit more into the math, but it is not strictly necessary if you just want to apply the second partial derivative test.
What we're building to
- To test whether a stable point of a multivariable function is a local minimum or maximum, look at the quadratic approximation of the function at that point. It is easier to analyze whether this quadratic approximation has a maximum or minimum.
- For two-variable functions, this boils down to studying expressions that look like this: $ax^2 + 2bxy + cy^2$. These are known as quadratic forms. The rule for when a quadratic form is always positive or always negative translates directly to the second partial derivative test.
Single variable case via quadratic approximation
First, I'd like to walk through the formal reasoning behind why the single-variable second derivative test works. By formal, I mean capturing the idea of concavity into more of an airtight argument.
In single-variable calculus, when $f'(x_0) = 0$ for some function $f$ and some input $x_0$, here's what the second derivative test looks like:
- If $f''(x_0) > 0$, $f$ has a local minimum at $x_0$.
- If $f''(x_0) < 0$, $f$ has a local maximum at $x_0$.
- If $f''(x_0) = 0$, the second derivative alone cannot determine whether $f$ has a maximum, minimum, or inflection point at $x_0$.
To think about why this test works, start by approximating the function with a Taylor polynomial out to the quadratic term, also known as a quadratic approximation:
$$f(x) \approx f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2}f''(x_0)(x - x_0)^2$$
Since $f'(x_0) = 0$, this quadratic approximation simplifies like this:
$$f(x) \approx f(x_0) + \frac{1}{2}f''(x_0)(x - x_0)^2$$
Notice that $(x - x_0)^2 \ge 0$ for all possible $x$, since squares are always positive or zero. That simple fact tells us everything we need to know! Why?
It means that when $f''(x_0) > 0$, we can read our approximation like this:
$$f(x) \approx f(x_0) + (\text{positive constant})(x - x_0)^2$$
Therefore $x_0$ is a local minimum of our approximation. In fact, it is a global minimum, but we only care about the fact that it is a local minimum. When the quadratic approximation of a function has a local minimum at the point of approximation, the function itself must also have a local minimum there. I'll say more on this in the last section, but for now the intuition should be clear, since the function and its approximation "hug" one another around the point of approximation $x_0$.
Similarly, if $f''(x_0) < 0$, we can read the approximation as
$$f(x) \approx f(x_0) + (\text{negative constant})(x - x_0)^2$$
In this case, the approximation has a local maximum at $x_0$, indicating that the function itself also has a local maximum there.
When $f''(x_0) = 0$, our quadratic approximation always equals the constant $f(x_0)$, meaning our function is in some sense too flat to be analyzed by the second derivative alone.
What to take away from this:
When $f'(x_0) = 0$, studying whether $f$ has a local maximum or minimum at $x_0$ comes down to whether the quadratic term of the Taylor approximation, $\frac{1}{2}f''(x_0)(x - x_0)^2$, is always positive or always negative.
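To make the single-variable test concrete in code, here is a minimal sketch. The example function $f(x) = x^3 - 3x$ and all names are my own illustration, not something from the article.

```python
# Illustrative sketch: f(x) = x^3 - 3x has critical points where
# f'(x) = 3x^2 - 3 = 0, i.e. at x = 1 and x = -1.

def f(x):
    return x**3 - 3*x

def second_derivative(x):
    # f''(x) = 6x, computed by hand
    return 6*x

def classify_critical_point(x0):
    """Apply the single-variable second derivative test at a critical point."""
    d2 = second_derivative(x0)
    if d2 > 0:
        return "local minimum"
    if d2 < 0:
        return "local maximum"
    return "inconclusive"

# f''(1) = 6 > 0, so x = 1 is a local minimum;
# f''(-1) = -6 < 0, so x = -1 is a local maximum.
```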
Two-variable case, visual warmup
Now suppose you have a function $f(x, y)$ with two inputs and one output, and you find a stable point. That is, a point $(x_0, y_0)$ where both its partial derivatives are $0$,
$$f_x(x_0, y_0) = 0 \qquad f_y(x_0, y_0) = 0$$
which is more succinctly written as
$$\nabla f(x_0, y_0) = \mathbf{0}$$
In order to determine whether this is a local maximum, local minimum, or neither, we look to its quadratic approximation. Let's start with a visual preview of what we want to do:
- $f$ will have a local minimum at a stable point if the quadratic approximation at that point is a concave-up paraboloid.
- $f$ will have a local maximum there if the quadratic approximation is a concave-down paraboloid.
- If the quadratic approximation is saddle-shaped, $f$ has neither a maximum nor a minimum, but a saddle point.
- If the quadratic approximation is flat in one or all directions, we do not have enough information to make conclusions about $f$.
Analyzing the quadratic approximation
The formula for the quadratic approximation of $f$, in vector form, looks like this:
$$Q_f(\mathbf{x}) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0) + \frac{1}{2}(\mathbf{x} - \mathbf{x}_0)^{\mathsf T}\,\mathbf{H}_f(\mathbf{x}_0)\,(\mathbf{x} - \mathbf{x}_0)$$
Since we care about points where the gradient is zero, we can get rid of that gradient term:
$$Q_f(\mathbf{x}) = f(\mathbf{x}_0) + \frac{1}{2}(\mathbf{x} - \mathbf{x}_0)^{\mathsf T}\,\mathbf{H}_f(\mathbf{x}_0)\,(\mathbf{x} - \mathbf{x}_0)$$
To see this spelled out for the two-variable case, let's expand the Hessian term:
$$Q_f(x, y) = f(x_0, y_0) + \frac{1}{2}\left[f_{xx}(x_0, y_0)(x - x_0)^2 + 2f_{xy}(x_0, y_0)(x - x_0)(y - y_0) + f_{yy}(x_0, y_0)(y - y_0)^2\right]$$
(Note, if this approximation or any of the notation feels shaky or unfamiliar, consider reviewing the article on quadratic approximations).
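As a numerical sanity check that the quadratic approximation "hugs" the function near a stable point, here is a small sketch. The example function $f(x, y) = \cos(x)\cos(y)$, which has a stable point at the origin, and its hand-computed second partials are my own illustration.

```python
import math

def f(x, y):
    return math.cos(x) * math.cos(y)

def quad_approx(x, y):
    # Quadratic approximation of f at the stable point (0, 0), where
    # f(0, 0) = 1, the gradient is zero, and fxx = fyy = -1, fxy = 0.
    fxx, fyy, fxy = -1.0, -1.0, 0.0
    return f(0, 0) + 0.5 * (fxx * x*x + 2 * fxy * x * y + fyy * y*y)

# Near the origin the two values are very close; the approximation is a
# concave-down paraboloid, matching the local maximum of f at (0, 0).
```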
As I showed with the single-variable case, the strategy is to study whether the quadratic term of this approximation is always positive or always negative.
Right now, this term is a lot to write down, but we can distill its essence by studying expressions of the following form:
$$ax^2 + 2bxy + cy^2$$
Such expressions are often fancifully called "quadratic forms".
- The word "quadratic" indicates that the terms are of order two, meaning they involve the product of two variables.
- The word "form" always threw me off here, and it makes the idea of a quadratic form sound more complicated than it really is. Mathematicians say "quadratic form" instead of "quadratic expression" to emphasize that all terms are of order $2$, and there are no linear or constant terms mucking up the expression. A phrase like "purely quadratic expression" would have been much too reasonable and understandable to adopt.
To make the notation for quadratic forms easier to generalize into higher dimensions, they are often written with respect to a symmetric matrix $\mathbf{M}$:
$$ax^2 + 2bxy + cy^2 = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} a & b \\ b & c \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$
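The equivalence between the expanded expression and the matrix product can be checked directly. In this sketch the matrix multiplication is expanded by hand; the function names are my own, not from the article.

```python
def quadratic_form(a, b, c, x, y):
    # The expanded expression a*x^2 + 2*b*x*y + c*y^2
    return a*x*x + 2*b*x*y + c*y*y

def quadratic_form_matrix(a, b, c, x, y):
    # [x y] M [x; y] with the symmetric matrix M = [[a, b], [b, c]]
    mx = a*x + b*y  # first component of M [x; y]
    my = b*x + c*y  # second component of M [x; y]
    return x*mx + y*my
```

Both functions return the same value for any inputs, which is why the symmetric matrix fully describes the form.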
Here is the crucial question:
- How can we tell whether the expression $ax^2 + 2bxy + cy^2$ is always positive, always negative, or neither, just by analyzing the constants $a$, $b$, and $c$?
Analyzing quadratic forms
If we plug in a constant value $y_0$ for $y$, we get some single-variable quadratic function:
$$ax^2 + 2by_0x + cy_0^2$$
The graph of this function is a parabola, and it will only cross the $x$-axis if this quadratic function has real roots.
Otherwise, it either stays entirely positive or entirely negative, depending on the sign of $a$.
We can apply the quadratic formula to this expression to see whether its roots are real or complex.
- The leading term is $a$.
- The linear term is $2by_0$.
- The constant term is $cy_0^2$.

Applying the quadratic formula looks like this:
$$x = \frac{-2by_0 \pm \sqrt{(2by_0)^2 - 4acy_0^2}}{2a} = y_0 \cdot \frac{-b \pm \sqrt{b^2 - ac}}{a}$$
If $y_0 = 0$, the quadratic has a double root at $x = 0$, meaning the parabola barely kisses the $x$-axis at that point. Otherwise, whether these roots are real or complex depends only on the sign of the expression $b^2 - ac$.
- If $b^2 - ac > 0$, there are real roots, so the graph of $ax^2 + 2by_0x + cy_0^2$ crosses the $x$-axis.
- Otherwise, if $b^2 - ac < 0$, there are no real roots, so the graph of $ax^2 + 2by_0x + cy_0^2$ either stays entirely positive or entirely negative.
For example, consider the case
$$f(x, y) = x^2 + 6xy + 5y^2$$
In this case, $b^2 - ac = 3^2 - (1)(5) = 4 > 0$, so the graph of $x^2 + 6y_0x + 5y_0^2$ always crosses the $x$-axis. Here is a video showing how that graph moves around as we let the value of $y_0$ slowly change.
This corresponds with the fact that the graph of $f(x, y) = x^2 + 6xy + 5y^2$ can be both positive and negative.
In contrast, consider a case where $b^2 - ac < 0$, for instance
$$f(x, y) = x^2 + 2xy + 5y^2$$
with $a = 1$, $b = 1$, $c = 5$. Now $b^2 - ac = 1 - 5 = -4 < 0$. This means the graph of $x^2 + 2y_0x + 5y_0^2$ never crosses the $x$-axis, although it kisses it when the constant term $cy_0^2$ is zero. Here is a video showing how that graph changes as we let the constant $y_0$ vary:
This corresponds with the fact that the multivariable function $f(x, y) = x^2 + 2xy + 5y^2$ is always positive, apart from equaling $0$ at the origin.
Rule for the sign of quadratic forms
As if to confuse students who are familiar with the quadratic formula, rules regarding quadratic forms are often phrased with respect to $ac - b^2$ instead of $b^2 - ac$. Since one is the negative of the other, this requires switching when you say $<$ and when you say $>$. The reason mathematicians prefer $ac - b^2$ is that it is the determinant of the matrix describing the quadratic form:
$$\det\begin{bmatrix} a & b \\ b & c \end{bmatrix} = ac - b^2$$
As a reminder, this is how the quadratic form looks using the matrix:
$$\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} a & b \\ b & c \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = ax^2 + 2bxy + cy^2$$
Tying this convention together with what we found in the previous section, we write the rule for the sign of a quadratic form as follows:
- If $ac - b^2 < 0$, the quadratic form can attain both positive and negative values, and it's possible for it to equal $0$ at values other than $(x, y) = (0, 0)$.
- If $ac - b^2 > 0$, the form is either always positive or always negative depending on the sign of $a$, but in either case it only equals $0$ at $(x, y) = (0, 0)$.
  - If $a > 0$, the form is always positive, so $(0, 0)$ is a global minimum point of the form.
  - If $a < 0$, the form is always negative, so $(0, 0)$ is a global maximum point of the form.
- If $ac - b^2 = 0$, the form will again be either always positive or always negative, but now it's possible for it to equal $0$ at values other than $(x, y) = (0, 0)$.
Some terminology:
When $ax^2 + 2bxy + cy^2 > 0$ for all $(x, y)$ other than $(0, 0)$, the quadratic form and the matrix associated with it are both called positive definite.
When $ax^2 + 2bxy + cy^2 < 0$ for all $(x, y)$ other than $(0, 0)$, they are both negative definite.
If you replace the $>$ and $<$ with $\ge$ and $\le$, the corresponding properties are positive semi-definite and negative semi-definite.
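The rule above can be packaged as a small classifier. This is a sketch with my own naming, following the $ac - b^2$ convention:

```python
def classify_form(a, b, c):
    """Classify the quadratic form a*x^2 + 2*b*x*y + c*y^2
    (equivalently, the symmetric matrix [[a, b], [b, c]])."""
    det = a*c - b*b  # determinant of the associated matrix
    if det < 0:
        return "indefinite"  # attains both positive and negative values
    if det > 0:
        return "positive definite" if a > 0 else "negative definite"
    # det == 0: the form is flat in some direction, so it can equal zero
    # at points other than the origin
    if a > 0 or c > 0:
        return "positive semi-definite"
    if a < 0 or c < 0:
        return "negative semi-definite"
    return "zero form"  # a = b = c = 0
```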
Applying this to $f$
Okay, zooming back out to where we started, let's write down our quadratic approximation again:
$$Q_f(x, y) = f(x_0, y_0) + \frac{1}{2}\left[f_{xx}(x_0, y_0)(x - x_0)^2 + 2f_{xy}(x_0, y_0)(x - x_0)(y - y_0) + f_{yy}(x_0, y_0)(y - y_0)^2\right]$$
The quadratic portion of $Q_f$ is written with respect to $(x - x_0)$ and $(y - y_0)$ instead of simply $x$ and $y$, so everywhere the rule for the sign of quadratic forms references the point $(x, y) = (0, 0)$, we apply it instead to the point $(x, y) = (x_0, y_0)$.
As with the single-variable case, when the quadratic approximation has a local maximum (or minimum) at $(x_0, y_0)$, it means $f$ has a local maximum (or minimum) at that point. This means we can translate the rule for the sign of a quadratic form directly to get the second derivative test:
Suppose $f_x(x_0, y_0) = f_y(x_0, y_0) = 0$. Then:
- If $f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0) - f_{xy}(x_0, y_0)^2 < 0$, $f$ has neither a minimum nor a maximum at $(x_0, y_0)$, but instead has a saddle point.
- If $f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0) - f_{xy}(x_0, y_0)^2 > 0$, $f$ definitely has either a maximum or a minimum at $(x_0, y_0)$, and we must look at the sign of $f_{xx}(x_0, y_0)$ to figure out which one it is.
  - If $f_{xx}(x_0, y_0) > 0$, $f$ has a local minimum.
  - If $f_{xx}(x_0, y_0) < 0$, $f$ has a local maximum.
- If $f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0) - f_{xy}(x_0, y_0)^2 = 0$, the second derivatives alone cannot tell us whether $f$ has a local minimum or maximum.
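Here is the full test as a sketch in code, applied to an example of my own choosing (not from the article): $f(x, y) = x^4 + y^4 - 4xy$, whose stable points are $(0, 0)$, $(1, 1)$, and $(-1, -1)$.

```python
def second_partial_test(fxx, fyy, fxy):
    """Classify a stable point given the second partials evaluated there."""
    h = fxx * fyy - fxy * fxy
    if h < 0:
        return "saddle point"
    if h > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    return "inconclusive"

def partials(x, y):
    # Hand-computed second partials of f(x, y) = x^4 + y^4 - 4xy
    return 12*x*x, 12*y*y, -4

# At (0, 0): h = 0 - 16 < 0, a saddle point.
# At (1, 1) and (-1, -1): h = 144 - 16 > 0 with fxx > 0, local minima.
```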
Our current tools are lacking
Everything presented here almost constitutes a full proof, except for one final step.
Intuitively, it might make sense that when a quadratic approximation bends and curves in a certain way, the function should bend and curve in that same way near the point of approximation. But how do we formalize this beyond intuition?
Unfortunately, we will not do that here. Making arguments about derivatives fully rigorous requires using real analysis, the theoretical backbone of calculus.
Furthermore, you might be wondering how this generalizes to functions with more than two inputs. There is a notion of quadratic forms with multiple variables, but phrasing the rule for when such forms are always positive or always negative uses various ideas from linear algebra, most notably the eigenvalues of the symmetric matrix describing the form.
Summary
- To test whether a stable point of a multivariable function is a local minimum or maximum, look at the quadratic approximation of the function at that point. It is easier to analyze whether this quadratic approximation has a maximum or minimum.
- For two-variable functions, this boils down to studying expressions that look like this: $ax^2 + 2bxy + cy^2$. These are known as quadratic forms. The rule for when a quadratic form is always positive or always negative translates directly to the second partial derivative test.
Want to join the conversation?
- Can Khan Academy add videos/articles on real analysis? Perhaps add a subtopic to the mathematics topic called "Analysis", with sub-subtopics on real, complex, and functional analysis.
- When he says "If ac - b^2 < 0, then the quadratic form can attain both positive and negative values, and is zero only at (x, y) = (0, 0)", I am not sure I understand, because this implies that the form has real solutions for any y value. So there should be other points for which the form = 0.
- Why is it that when fxx*fyy - fxy^2 = 0, the second derivatives alone cannot tell us whether f has a local minimum or maximum?
How does this relate to the aforementioned statement that if ac - b^2 = 0, the form will again be either always positive or always negative, but now it's possible for it to equal 0 at values other than (x, y) = (0, 0)?
I am having trouble figuring this out. My strategy is to figure out what the parabola looks like when ac - b^2 = 0 (that is, when fxx*fyy - fxy^2 = 0). But I don't understand intuitively or analytically the implications of "now it's possible for it to equal 0 at values other than (x, y) = (0, 0)."
Any help will be appreciated.
  - Hey Sam,
You have to look at the whole function to understand this. We are saying that for a particular y, the zeros of the approximating function at (x, y) [where x is a variable] will occur at x = y*((-b ± sqrt(b^2 - ac))/a). If ac - b^2 = b^2 - ac = 0, then the square root vanishes, and for each y the only zero occurs at x = -(b/a)*y, so every zero lies on a single line through the origin.
For that to be the case, the graph of the function can NEVER cross the plane z = 0. Why is that? It's hard to prove without using real analysis, but you can definitely picture it. We know for a fact that every zero of this function has to be on that single line. Now imagine that the graph crosses the plane z = 0. Then there's some point where the function is zero, and by the thing we proved earlier, it has to be on that line. Now, we have to use the fact that the function we're talking about is quadratic, and is kind of shaped like a sombrero. So if its graph crossed the plane z = 0, the set of points where z = 0 would be a circular shape in the x-y plane. However, there's no circle that can fit on a single line.
- Would it then be right to conclude that for f(x,y,z) to have a local min/max at (x0,y0,z0) you need
fxxfyy + fxxfzz + fyyfzz - fxy² - fyz² - fxz²
to be greater than 0, since for f to have a min/max you need a min/max over the xy-plane (which would be accounted for by z=0, y=y0, x=variable, then the same reasoning as in the article to get fxxfyy - fxy² > 0), as well as over the yz-plane (fyyfzz - fyz² > 0) and the xz-plane (fxxfzz - fxz² > 0), and adding all of those together gives us
fxxfyy + fyyfzz + fxxfzz - fxy² - fyz² - fxz² > 0
, or would that only be analogous to having fxx and fyy be greater/smaller than 0, and is there still information missing?
- I cannot understand why the quadratic form being able to be both negative and positive means that the function must have a saddle.
"This corresponds with the fact that the graph of f(x,y) = x^2 + 6xy + 5y^2 can be both positive and negative." No matter how many times I read this section, I don't really get how this makes the graph a saddle at the point?
  - I was stuck on this for a while too. If you watch the videos and only focus on the vertex of the parabola, you will notice it traces a parabola as "b" (that is, y0) varies. Now notice how, in the first case, the parabola that the vertex traces has opposite concavity to the graphed parabola. This corresponds to a saddle. But in the second case, the traced parabola has the same concavity as the graphed function, and so the 3D graph is a paraboloid. You can sort of imagine the 2D graph to just be a slice of the graph along the axis of the constant variable.
Now, the vertex of the graphed parabola must always pass through the origin since there are no constant terms. This also holds true for the traced graph, since it's really just a projection of the function with x made constant instead of y. You can check all of this with this graph: https://www.desmos.com/calculator/qpp8ujuhvc.
This all means that the vertex of the parabola cannot cross the x axis: if the vertex is above y=0 it can only be above y=0.
Putting this all together, if the graphed parabola's vertex is above y=0 at any point, then you know the parabola it traces will be concave up like a cup, and concave down like a frown if the vertex is below y=0.
Now, if a vertex is below y=0 then the only way it can have real x-axis intercepts is if its parabola is concave up. So the only way a function could have real x-axis intercepts in this case is if the concavities of the graphed and traced parabolas are opposite each other, because the vertex being below y=0 inherently means the traced parabola is concave down but requires the graphed parabola to be concave up. So by checking if the parabola has real roots, we can cleverly check if the parabolas are opposite each other, and therefore whether the 3D quadratic approximation is a saddle or a paraboloid.
- Do there exist videos/articles on limits and continuity in 3D?
- Why does
"the quadratic form can attain both positive and negative values, and it's possible for it to equal 0 at values other than (x, y) = (0, 0)"
imply that
"f has neither a minimum nor maximum at (x_0, y_0), but instead has a saddle point"?
I'm missing the implicit connection here.
- At the beginning of the section titled "Analyzing Quadratic Forms," why can we plug in a constant value y0 for y?
- As I understand it, it's just a conceptual trick. You could just as well plug in a constant value x0 for x and you'd get the same result.
More specifically, y0 is a constant in the instance the equation is describing, but you can still vary y0. That's the point of the videos above where y0 is varied (their titles seem to be mislabeled, saying b varies when really y0 varies). What you then find is that, no matter how many different numbers you try setting y0 to, the graph might sometimes only ever be positive or negative, depending on a, b, and c. This idea is captured by using the quadratic formula, see above.
- This article raised a few questions for me:
1) If a point p in R^k is not a critical point of a scalar-(or vector-)valued function f (i.e., the gradient of f at p is not 0), yet the quadratic approximation does have a local extremum at p, do we still have that f has a local extremum at p?
2) Does a scalar- or vector-valued function f defined on R^k having a local extremum at a point p in R^k always (or at least sometimes) imply that its quadratic approximation has a local minimum there?
3) In the context of this article, if ac - b^2 > 0, how do we know the only zero of the quadratic form is at (0, 0)?
4) Again in the context of this article, why does ac - b^2 = 0 mean both that the quadratic form will always be positive or always be negative and that it can have zeros at points other than (0, 0)?
- Can this test be arrived at by interpreting the Hessian matrix as the Jacobian matrix of the gradient field and then finding an interpretation for the determinant? Maybe the eigenvalues of the Hessian could be used somehow.