Learn how to test whether a function with two inputs has a local maximum or minimum.
Background (not strictly necessary, but used in one section): the Hessian matrix.
Also, if you are a little rusty on the second derivative test from single-variable calculus, you might want to quickly review it here since it's a good comparison for the second partial derivative test.
Once you have found a point $(x_0, y_0)$ where the gradient of $f$ is the zero vector, the second partial derivative test tells us how to verify whether this stable point is a local maximum, a local minimum, or a saddle point. Specifically, you start by computing this quantity:
$$H = f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0) - f_{xy}(x_0, y_0)^2$$
Then the second partial derivative test goes as follows (a quick worked example follows this list):
- If $H > 0$, then $(x_0, y_0)$ is either a local maximum or a local minimum (check the sign of $f_{xx}(x_0, y_0)$ to see which one).
- If $H < 0$, then $(x_0, y_0)$ is a saddle point.
- If $H = 0$, we do not have enough information to tell.
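For instance (a quick added example, not spelled out in the original walkthrough): for $f(x, y) = x^2 + y^2$, the gradient is zero at the origin, and
$$H = f_{xx}(0, 0)\,f_{yy}(0, 0) - f_{xy}(0, 0)^2 = (2)(2) - 0^2 = 4 > 0,$$
so, together with $f_{xx}(0, 0) = 2 > 0$, the test says the origin is a local minimum.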
Focus first on this term:
$$f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0)$$
You can think of it as cleverly encoding whether or not the concavity of $f$'s graph is the same in both the $x$ and $y$ directions.
For example, look at the function
$$f(x, y) = x^2 - y^2$$
This function has a saddle point at $(0, 0)$. The second partial derivative with respect to $x$ is a positive constant:
$$f_{xx}(x, y) = 2$$
In particular, $f_{xx}(0, 0) = 2$, and the fact that this is positive means $f$ looks like it has upward concavity as we travel in the $x$-direction. On the other hand, the second partial derivative with respect to $y$ is a negative constant:
$$f_{yy}(x, y) = -2$$
This indicates downward concavity as we travel in the $y$-direction. This mismatch means we must have a saddle point, and it is encoded as the product of the two second partial derivatives:
$$f_{xx}(0, 0)\,f_{yy}(0, 0) = (2)(-2) = -4 < 0$$
Since $f_{xy}(x_0, y_0)^2$ can never be negative, subtracting it only makes the full expression more negative.
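To complete the computation (an added aside): the mixed partial derivative of this particular function is $f_{xy}(x, y) = 0$, so the full quantity is
$$H = f_{xx}(0, 0)\,f_{yy}(0, 0) - f_{xy}(0, 0)^2 = (2)(-2) - 0^2 = -4 < 0,$$
which is exactly the saddle-point case of the test.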
On the other hand, when the signs of $f_{xx}(x_0, y_0)$ and $f_{yy}(x_0, y_0)$ are either both positive or both negative, the $x$ and $y$ directions agree about what the concavity of $f$ should be. In either of these cases, the term $f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0)$ will be positive.
But this is not enough!
Consider the function
$$f(x, y) = x^2 + y^2 + pxy$$
where $p$ is some constant.
Concept check: With this definition of $f$, compute its second derivatives $f_{xx}$, $f_{yy}$, and $f_{xy}$.
Because the second derivatives $f_{xx}$ and $f_{yy}$ are both positive, the graph will appear concave up as we travel in either the pure $x$ direction or the pure $y$ direction (no matter what $p$ is).
However, watch the following video where we show how this graph changes as we let the constant $p$ vary:
What's going on here? How can the graph have a saddle point even though it is concave up in both the $x$ and $y$ directions? The short answer is that other directions matter too, and in this case, they are captured by the term $pxy$.
For example, if we isolate this term and look at the graph of $xy$ by itself, here's what it looks like:
It has a saddle point at $(0, 0)$. This is not because the $x$ and $y$ directions disagree about concavity, but instead because the concavity appears positive along the diagonal direction $(1, 1)$ and negative along the direction $(-1, 1)$.
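To see this concretely (a small added check), restrict $xy$ to lines through the origin in those two directions:
$$\text{along } (t, t):\;\; xy = t \cdot t = t^2 \;\;(\text{concave up}), \qquad \text{along } (-t, t):\;\; xy = (-t) \cdot t = -t^2 \;\;(\text{concave down}).$$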
Let's see what the second partial derivative test tells us about the function $f(x, y) = x^2 + y^2 + pxy$. Using the values for the second derivatives you were asked to compute above, here's what we get:
$$H = f_{xx}\,f_{yy} - f_{xy}^2 = (2)(2) - p^2 = 4 - p^2$$
When $|p| > 2$, this is negative, so $f$ has a saddle point. When $|p| < 2$, it is positive, so $f$ has a local minimum.
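As a sanity check, here is a small added sketch (not part of the original article, assuming the sympy library is available) that reproduces the computation of $H = 4 - p^2$ and classifies the critical point at the origin for a couple of sample values of $p$:

```python
import sympy as sp

x, y, p = sp.symbols('x y p')
f = x**2 + y**2 + p*x*y

# Second partial derivatives of f
fxx = sp.diff(f, x, x)   # 2
fyy = sp.diff(f, y, y)   # 2
fxy = sp.diff(f, x, y)   # p

# The key quantity from the second partial derivative test; simplifies to 4 - p**2
H = sp.expand(fxx * fyy - fxy**2)
print("H =", H)

# Classify the critical point at (0, 0) for sample values of p
for p_val in [1, 3]:
    H_val = H.subs(p, p_val)
    if H_val > 0:
        print(f"p = {p_val}: H = {H_val} > 0, local minimum (since fxx = 2 > 0)")
    elif H_val < 0:
        print(f"p = {p_val}: H = {H_val} < 0, saddle point")
    else:
        print(f"p = {p_val}: H = 0, test is inconclusive")
```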
You can think of the quantity $f_{xy}(x_0, y_0)$ as measuring how much the function looks like the graph of $xy$ near the point $(x_0, y_0)$.
Considering how many directions have to agree with each other, it is actually quite surprising that we only need to consider three values: $f_{xx}(x_0, y_0)$, $f_{yy}(x_0, y_0)$, and $f_{xy}(x_0, y_0)$.
The next article gives more detailed reasoning behind the second partial derivative test.
- Once you find a point where the gradient of a multivariable function is the zero vector, meaning the tangent plane of the graph is flat at this point, the second partial derivative test is a way to tell if that point is a local maximum, local minimum, or a saddle point (a short computational sketch applying the test appears after this summary).
- The key term of the second partial derivative test is this:
$$H = f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0) - f_{xy}(x_0, y_0)^2$$
- If $H > 0$, the function definitely has a local maximum/minimum at the point $(x_0, y_0)$.
  - If $f_{xx}(x_0, y_0) > 0$, it is a minimum.
  - If $f_{xx}(x_0, y_0) < 0$, it is a maximum.
- If $H < 0$, the function definitely has a saddle point at $(x_0, y_0)$.
- If $H = 0$, there is not enough information to tell.
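Here is a minimal computational sketch of the whole test (added for illustration; the function name `classify_critical_point` and the use of the sympy library are my own choices, not from the article). It assumes you have already found a point where the gradient is zero:

```python
import sympy as sp

def classify_critical_point(f, x, y, x0, y0):
    """Second partial derivative test at a point (x0, y0) where the gradient of f is zero."""
    fxx = sp.diff(f, x, x).subs({x: x0, y: y0})
    fyy = sp.diff(f, y, y).subs({x: x0, y: y0})
    fxy = sp.diff(f, x, y).subs({x: x0, y: y0})
    H = fxx * fyy - fxy**2
    if H > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    if H < 0:
        return "saddle point"
    return "inconclusive"

x, y = sp.symbols('x y')
print(classify_critical_point(x**2 - y**2, x, y, 0, 0))  # saddle point
print(classify_critical_point(x**2 + y**2, x, y, 0, 0))  # local minimum
```

For functions of more than two variables, you would instead look at the eigenvalues of the full Hessian matrix, as discussed in the conversation below.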
Want to join the conversation?
- We often get the type of problem on our exams where a point (x, y) gives H = 0.
We are then told to use the "definition" of a saddle point to check if this is the case. My teacher used an example where the point was (0, 0), with f(0, 0) = 0, and the function f(x, y) = x^2 y + x y^3 + x y^2. He then replaced (0, 0) --> (a, a), which in turn made f(a, a) = a^3(a + 2), and he chose a just above and just below zero. Since f > 0 when a > 0 and f < 0 when a < 0, the conclusion was that the point (0, 0) was a saddle point.
If you've made it this far, I applaud you.
Now for my question: If we applied the same test at a maximum, would both choices of (a, a) then give us values just below the value of f at the actual point?
Likewise, would the test give two values just ABOVE the value of f at the actual point, if the point were a minimum?
This seems right to me. If the graph looked like a traffic cone, all points below the max would compute smaller values of f, right?(12 votes)
- How can we apply the second partial derivative test for functions with more than 2 variables (like f(x,y,z))?(5 votes)
- How do I find the second partial derivatives for a function with 3 variables and how does this test work for that? Thanks :)(4 votes)
- You actually need to look at the eigenvalues of the Hessian matrix: if they are all positive, there is a local minimum; if they are all negative, there is a local max; and if they have mixed signs, there is a saddle point.(2 votes)
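For a concrete illustration of that idea (an added sketch; the example function and the choice of sympy and numpy are mine, not from the comment), here is one way to check the eigenvalues of the Hessian of a three-variable function at a critical point:

```python
import numpy as np
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + 2*y**2 + 3*z**2 + x*y   # has a critical point at the origin

# Hessian matrix of second partial derivatives, evaluated at (0, 0, 0)
H = sp.hessian(f, (x, y, z)).subs({x: 0, y: 0, z: 0})

# Eigenvalues of the (symmetric) Hessian decide the classification
eigenvalues = np.linalg.eigvalsh(np.array(H.tolist(), dtype=float))
print(eigenvalues)   # all positive here, so the origin is a local minimum
```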
- What do you do if the second partial derivative still has variables in it? How do you know if fxx or fyy is positive or negative?(1 vote)
- If the second partial derivative is dependent on x and y, then it is different for different x and y. fxx(0, 0) is different from fxx(1, 0) which is different from fxx(0, 1) and fxx(1, 1) and so on. There's nothing wrong with that. You need to decide which point you care about and plug in the x and y values.
Recall that this was also the case with the second derivative test in single-variable calculus: you evaluate the first or second derivative at some particular point.(4 votes)
- What if H>0 but fxx=0?(1 vote)
- This isn't possible.
Since fxy = fyx, (fxy)*(fyx) = (fxy)^2 must be positive or zero. If fxx = 0, then (fxx)*(fyy) = 0. The determinant H is (fxx)*(fyy) - (fxy)*(fyx), so if fxx = 0, then H cannot be greater than 0.
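Written out, the same observation is just: H = (0)*(fyy) - (fxy)^2 = -(fxy)^2 <= 0.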
Hope this helps :)(4 votes)
- Is the formula for the second partial derivative test (fxx*fyy - (fxy)^2) just the determinant of the Hessian matrix learned earlier in this section?(2 votes)
- Yes. For a function of two variables, fxx*fyy - (fxy)^2 is exactly the determinant of the Hessian matrix, since fxy = fyx (assuming continuous second partial derivatives).(1 vote)
- We were taught that the test actually was H = (f_12)^2 - (f_11)(f_22) and that if H > 0, the point is a saddle point and of course the opposite for local max/min. This contradicts what you have written here.(0 votes)
- It's absolutely OK; the two versions are consistent.
fxx*fyy - (fxy)^2 > 0 means it's a max/min.
What you were taught is just this quantity multiplied by (-1), so the inequality flips:
-(fxx*fyy) + (fxy)^2 < 0 still indicates a local maximum/minimum.
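Written out (just restating the same point): H' = (fxy)^2 - fxx*fyy = -(fxx*fyy - (fxy)^2) = -H, so H > 0 exactly when H' < 0, and both conventions classify the same points the same way.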
Was my explanation clear enough?(4 votes)
- Under '[Phrased using the Hessian determinant]', we say that the second derivative test can be done using the determinant of the Hessian. This is only true for functions with input space in R2, right? With higher-dimensional input space, we need H to be positive definite. Correct?(1 vote)