
## Multivariable calculus


Lesson 3: Optimizing multivariable functions

# Multivariable maxima and minima

A description of maxima and minima of multivariable functions, what they look like, and a little bit about how to find them. Created by Grant Sanderson.

## Want to join the conversation?

• How do you find maxima of functions of three variables?
• I believe that the process for finding maxima and minima with three variables is exactly the same; you would just add another component to the gradient vector. However, it's not really possible to visualize that many dimensions, so they can't 'show' you, per se.
• Can we optimize multivariable functions without graphing them?
• Yes, you can. For more complex multivariable functions you would use algorithms like steepest descent/ascent, conjugate gradient, or the Newton-Raphson method. These methods are generally referred to as optimisation algorithms. Simplistically speaking, they work as follows:
1) What direction should I move in to increase my value the fastest? The gradient.
2) Take a small step in that direction.
3) Go to step 1, unless somebody tells me to stop or the gradient is zero.
This will not guarantee that you reach THE global maximum, only that you will reach A maximum (most likely a local one).
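The three steps above can be sketched in a few lines of code. This is a minimal gradient-ascent sketch on an illustrative function of my choosing, f(x, y) = -(x - 1)² - (y + 2)², whose single maximum is at (1, -2); the step size, tolerance, and iteration cap are arbitrary illustrative values, not tuned ones.

```python
import numpy as np

def grad_f(p):
    # Gradient of f(x, y) = -(x - 1)^2 - (y + 2)^2 (an illustrative example).
    x, y = p
    return np.array([-2.0 * (x - 1.0), -2.0 * (y + 2.0)])

def gradient_ascent(start, step=0.1, tol=1e-8, max_iters=1000):
    p = np.array(start, dtype=float)
    for _ in range(max_iters):
        g = grad_f(p)                 # 1) direction of fastest increase
        if np.linalg.norm(g) < tol:   # 3) stop when the gradient is (nearly) zero
            break
        p = p + step * g              # 2) take a small step in that direction
    return p

print(gradient_ascent([0.0, 0.0]))    # converges toward (1, -2)
```

For this concave function the local maximum is also the global one, but as the answer notes, on a bumpier function the same loop would simply stop at whichever local maximum it climbs to first.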
• Is it possible to find maxima and minima from the divergence of the gradient? Maxima have very negative divergence and minima have very positive divergence?
• I guess, but if you want to do that, you'll need to find the maximum of the divergence of the gradient of the function. How do you find the maximum of the divergence of the gradient of the function? You can find the maximum of the divergence of the gradient of the divergence of the gradient of the function. Um, er...

You were trying to find the maximum of something, and you do that by finding the maximum of something else? Okay.

It's much easier to just set the gradient to 0. Once you've found an extremum, can you use the divergence of the gradient to determine whether it is a maximum or minimum?

Kind of, but there are saddle points. Saddle points can have nonzero divergence of the gradient. So you need to apply the second derivative test first, using the Hessian matrix's determinant. But after applying that test, you can tell whether it's a max or min just by using one second partial derivative, so there's no need for the divergence anymore.

The divergence is the trace of the Hessian matrix, which is related to its determinant but not quite the same (the trace is the sum of the diagonal entries of a matrix).
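The procedure the answer describes (solve grad f = 0, then use the Hessian determinant, then one second partial) can be sketched with sympy. The test functions x² + y² (a minimum) and x² - y² (a saddle) are illustrative choices of mine, not from the thread.

```python
import sympy as sp

x, y = sp.symbols('x y')

def classify(f):
    # Critical points: where both partial derivatives vanish.
    crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
    H = sp.hessian(f, (x, y))
    results = []
    for pt in crit:
        D = H.det().subs(pt)       # determinant of the Hessian at the point
        fxx = H[0, 0].subs(pt)     # one second partial settles max vs. min
        if D > 0 and fxx > 0:
            results.append((pt, 'min'))
        elif D > 0 and fxx < 0:
            results.append((pt, 'max'))
        elif D < 0:
            results.append((pt, 'saddle'))
        else:
            results.append((pt, 'inconclusive'))  # D = 0: test says nothing
    return results

print(classify(x**2 + y**2))   # minimum at (0, 0)
print(classify(x**2 - y**2))   # saddle at (0, 0): det of Hessian is negative
```

Note how the saddle case is caught by the determinant alone, before the sign of f_xx is ever consulted, which is exactly the ordering the answer insists on.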
• what about absolute maxima and absolute minima? is this explained somewhere ?
• Can't we use Laplacian(f)(x_0,y_0) < 0 for optimization?
• Well, the Laplacian is sort of like a 2nd derivative thing, while for optimization one tends to use the first derivative.
• I wouldn't say it's just a matter of notational convenience to say ∇f = 0 -- it makes a lot of sense: at such a point no direction is a direction of steepest ascent, and the slope in every direction is zero.
• Of course, it's quite possible that there are no solutions to both partial derivatives being zero simultaneously, even when each of them individually has solutions.
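A small sympy check of this point, on an illustrative example of my choosing: for f(x, y) = eˣ sin(y), each partial derivative vanishes somewhere, yet they never vanish at the same point, so f has no critical points at all.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.exp(x) * sp.sin(y)

fx = sp.diff(f, x)   # exp(x)*sin(y): zero whenever sin(y) = 0
fy = sp.diff(f, y)   # exp(x)*cos(y): zero whenever cos(y) = 0

# fx**2 + fy**2 simplifies to exp(2*x), which is strictly positive,
# so fx and fy can never both be zero simultaneously.
print(sp.simplify(fx**2 + fy**2))
```

Since sin(y) and cos(y) are never zero together, the two zero-sets are disjoint, even though each one on its own is infinite.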