Examples of the Lagrangian and Lagrange multiplier technique in action.
Lagrange multiplier technique, quick recap
When you want to maximize (or minimize) a multivariable function f(x, y, …) subject to the constraint that another multivariable function equals a constant, g(x, y, …) = c, follow these steps:
- Step 1: Introduce a new variable λ, and define a new function L as follows:

  L(x, y, …, λ) = f(x, y, …) − λ(g(x, y, …) − c)

  This function L is called the "Lagrangian", and the new variable λ is referred to as a "Lagrange multiplier".
- Step 2: Set the gradient of L equal to the zero vector:

  ∇L(x, y, …, λ) = 0

  In other words, find the critical points of L.
- Step 3: Consider each solution, which will look something like (x₀, y₀, …, λ₀). Plug each one into f. Or rather, first remove the λ₀ component, then plug the rest into f, since f does not have λ as an input. Whichever one gives the greatest (or smallest) value is the maximum (or minimum) point you are seeking.
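The three steps above can be sketched in code. This is a minimal illustration, assuming SymPy is available, applied to a made-up problem (maximize f(x, y) = x + y on the unit circle), not one taken from this article:

```python
# Recap steps, sketched with SymPy on an illustrative problem:
# maximize f(x, y) = x + y subject to g(x, y) = x^2 + y^2 = 1.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

f = x + y             # function to maximize
g = x**2 + y**2       # constraint function
c = 1                 # constraint constant

# Step 1: introduce lambda and define the Lagrangian.
L = f - lam * (g - c)

# Step 2: set the gradient of L equal to the zero vector.
grad = [sp.diff(L, v) for v in (x, y, lam)]
solutions = sp.solve(grad, (x, y, lam), dict=True)

# Step 3: plug each solution (minus its lambda component) into f.
values = [f.subs({x: s[x], y: s[y]}) for s in solutions]
best = max(values)
print(best)  # sqrt(2), attained at x = y = sqrt(2)/2
```

Here `sp.solve` returns both critical points (the constrained maximum and minimum), and Step 3 is the comparison that picks out the maximum.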
Example 1: Budgetary constraints
Suppose you are running a factory, producing some sort of widget that requires steel as a raw material. Your costs are predominantly human labor, at a wage of w per hour for your workers, and the steel itself, which runs at a price of p per ton. Suppose your revenue is loosely modeled by a function R(h, s), where h represents hours of labor and s represents tons of steel.
If your budget is some fixed amount B, what is the maximum possible revenue?

The per-hour labor cost w and per-ton steel cost p tell us that the total cost of production, in terms of h and s, is wh + ps. Therefore the budget can be translated to the constraint

wh + ps = B
Before we dive into the computation, you can get a feel for this problem using the following interactive diagram. You can see which values of h and s yield a given revenue (blue curve) and which values satisfy the constraint (red line).
Since we need to maximize a function R(h, s) subject to a constraint, wh + ps = B, we begin by writing the Lagrangian function for this setup:

L(h, s, λ) = R(h, s) − λ(wh + ps − B)
Next, set the gradient ∇L equal to the zero vector. This is the same as setting each partial derivative equal to 0. First, we handle the partial derivative with respect to h:

∂L/∂h = ∂R/∂h − λw = 0

Next, we handle the partial derivative with respect to s:

∂L/∂s = ∂R/∂s − λp = 0

Finally, we set the partial derivative with respect to λ equal to 0, which as always is just the same thing as the constraint. In practice, you can of course just write the constraint itself, but I'll write out the partial derivative here just to make things clear:

∂L/∂λ = −(wh + ps − B) = 0
Putting it together, the system of equations we need to solve is

∂R/∂h = λw
∂R/∂s = λp
wh + ps = B
In practice, you should almost always use a computer once you get to a system of equations like this, especially since the equations will likely be more complicated than these in real applications. Once you do, you'll find a solution (h₀, s₀, λ₀). The values h₀ and s₀ tell you how many hours of labor to employ and how many tons of steel to purchase, and plugging them into R gives the maximum possible revenue.
The interpretation of this constant λ₀ is left to the next article.
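As a sketch of handing such a system to a computer, the snippet below applies SymPy's numeric solver to the Lagrangian of a budget problem of this shape. The wage ($20/hour), steel price ($170/ton), budget ($20,000), and the Cobb–Douglas-style revenue model are all made-up stand-ins for illustration, not figures taken from the text above:

```python
# Hypothetical budget problem: every number below is an illustrative stand-in.
import sympy as sp

h, s, lam = sp.symbols('h s lambda', positive=True)

w, p, B = 20, 170, 20000                                # wage, steel price, budget (made up)
R = 200 * h**sp.Rational(2, 3) * s**sp.Rational(1, 3)   # assumed revenue model

# Lagrangian for maximizing R subject to w*h + p*s = B.
L = R - lam * (w*h + p*s - B)
grad = [sp.diff(L, v) for v in (h, s, lam)]

# Solve grad L = 0 numerically, starting from a rough guess.
h0, s0, lam0 = [float(v) for v in sp.nsolve(grad, (h, s, lam), (700, 40, 2.5))]
revenue = 200 * h0**(2/3) * s0**(1/3)
print(round(h0, 1), round(s0, 2), round(revenue))
```

With these placeholder numbers the solver lands on the classic Cobb–Douglas split: two thirds of the budget goes to labor and one third to steel, matching the exponents in the revenue model.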
Example 2: Maximizing dot product
Problem: Let the three-dimensional vector v be defined as follows:

v = (v₁, v₂, v₃)

Consider every possible unit vector û in three-dimensional space. For which one is the dot product û · v the greatest?
The diagram below is two-dimensional, but not much changes in the intuition as we move to three dimensions.
If you are fluent with dot products, you may already know the answer. It's one of those mathematical facts worth remembering. If you don't know the answer, all the better! Because we will now find and prove the result using the Lagrange multiplier method.
First, we need to spell out how exactly this is a constrained optimization problem. Write the coordinates of our unit vector û as x, y and z:

û = (x, y, z)
The fact that û is a unit vector means its magnitude is 1:

||û||² = x² + y² + z² = 1

This is our constraint. Maximizing the dot product û · v means maximizing the following quantity:

û · v = v₁x + v₂y + v₃z
The Lagrangian, with respect to this function and the constraint above, is

L(x, y, z, λ) = v₁x + v₂y + v₃z − λ(x² + y² + z² − 1)
We now solve for x, y, z and λ by setting each partial derivative of this expression equal to 0:

∂L/∂x = v₁ − 2λx = 0
∂L/∂y = v₂ − 2λy = 0
∂L/∂z = v₃ − 2λz = 0

Remember, setting the partial derivative with respect to λ equal to 0 just restates the constraint:

x² + y² + z² = 1

Solving for x, y and z in the first three equations above, we get

x = v₁/(2λ)
y = v₂/(2λ)
z = v₃/(2λ)
Ah, what beautiful symmetry. Each of these expressions has the same 1/(2λ) factor, and the coefficients v₁, v₂ and v₃ match up with the coordinates of v. Being good math students as we are, we won't let good symmetry go to waste. Combining the three equations above into a single vector equation, we can relate û and v as follows:

û = (1/(2λ)) v

Therefore û is proportional to v! Geometrically, this means û points in the same direction as v. There are two unit vectors proportional to v:
- One which points in the same direction; this is the vector that maximizes û · v.
- One which points in the opposite direction; this one minimizes û · v.
We can write these two unit vectors by normalizing v, which just means dividing v by its magnitude:

û = ± v / ||v||

The maximizing choice is the one with the plus sign, so we can write the maximizing unit vector explicitly like this:

û = v / ||v||
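The conclusion û = v/||v|| is easy to sanity-check numerically. In this sketch, v = (2, 3, 1) is a made-up example vector (not the one from the article): no randomly sampled unit vector beats v/||v||, and the maximum dot product equals ||v||:

```python
# Numeric sanity check of Example 2's conclusion with a made-up vector v.
import numpy as np

v = np.array([2.0, 3.0, 1.0])       # hypothetical example vector
u_best = v / np.linalg.norm(v)      # claimed maximizer: v, normalized

best_value = u_best @ v             # equals ||v||

# Compare against many random unit vectors.
rng = np.random.default_rng(0)
samples = rng.normal(size=(100_000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)

assert (samples @ v).max() <= best_value
print(best_value)  # sqrt(14), the magnitude of v
```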
Just skip the Lagrangian
If you read the last article, you'll recall that the whole point of the Lagrangian L is that setting ∇L = 0 encodes the two properties a constrained maximum must satisfy:
- Gradient alignment between the target function and the constraint function: ∇f = λ∇g
- The constraint itself: g = c
When working through examples, you might wonder why we bother writing out the Lagrangian at all. Wouldn't it be easier to just start with these two equations rather than re-establishing them from ∇L = 0 every time? The short answer is yes, it would be easier. If you find yourself solving a constrained optimization problem by hand, and you remember the idea of gradient alignment, feel free to go for it without worrying about the Lagrangian.
In practice, it's often a computer solving these problems, not a human. Given that there are many highly optimized programs for finding where the gradient of a given function is 0, it's both clean and useful to encapsulate our problem into the single equation ∇L = 0.
Furthermore, the Lagrangian itself, as well as several functions derived from it, arises frequently in the theoretical study of optimization. In this light, reasoning about the single object L rather than multiple conditions makes it easier to see the connections between high-level ideas. Not to mention, it's quicker to write down on a blackboard.
In either case, whatever your future relationship with constrained optimization might be, it is good to be able to think about the Lagrangian itself and what it does. The examples above illustrate how it works, and hopefully help to drive home the point that the single equation ∇L = 0 encapsulates both ∇f = λ∇g and g = c.
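For completeness, here is what "skipping the Lagrangian" looks like in practice: a sketch, on an illustrative problem of my choosing (maximize xy on the circle x² + y² = 2), that starts directly from the gradient-alignment condition and the constraint, never forming L:

```python
# Solve grad f = lambda * grad g together with g = c directly,
# skipping the Lagrangian entirely.
# Illustrative problem: maximize f(x, y) = x*y subject to x^2 + y^2 = 2.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x * y
g = x**2 + y**2
c = 2

equations = [
    sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),  # gradient alignment, x component
    sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),  # gradient alignment, y component
    sp.Eq(g, c),                                # the constraint itself
]
solutions = sp.solve(equations, (x, y, lam), dict=True)

# The largest value of f among the candidates is the constrained maximum.
best = max(f.subs(sol) for sol in solutions)
print(best)  # 1, attained at x = y = 1 and at x = y = -1
```

The system being solved is exactly the one ∇L = 0 would produce; we have simply written the two conditions down by hand instead of deriving them from L.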
Want to join the conversation?
- In example 2, why do we put a hat on u? Is it because it is a unit vector, or because it is the vector that we are looking for?(10 votes)
- I have seen some questions where the constraint is added in the Lagrangian, unlike here where it is subtracted. e.g. Lagrangian = f(x) + λg(x)
How does one decide whether to add or subtract?
- Hello, I have been thinking about this and can't really understand what is happening. So suppose I want to maximize
f(x,y,z) = 5xy + 8xz + 3yz, with the constraint 2xyz = 1920.
After doing the multiplier method, I only get one solution. Does that mean that the function does not have a maximum or a minimum? Or how can I tell whether or not the solution I get represents a maximum?(5 votes)
- Hi everyone, I hope you all are well. When Grant writes that "therefore u-hat is proportional to vector v!" in example two, is the exclamation point representing a factorial symbol or just something for "wow" exclamation? Thanks for your help.(4 votes)
- Instead of constraining the optimization to a curve on the x-y plane, is there a method to constrain the optimization to a region/area on the x-y plane? Like the region x^2+y^2<=2, which is all the points inside the circle including the boundary.(3 votes)
- Hello, and really, thank you for your amazing site. Can you please explain to me why we don't use the whole Lagrange method but only the first part? Why don't we use the 2nd derivatives?(2 votes)
- In step 3 of the recap, how can we tell we don't have a saddle point? The fact that you don't mention it makes me think that such a possibility doesn't exist. But it does, right?(2 votes)
- How do you solve ∇L=0 when the equations are not linear?(2 votes)
- When you have non-linear equations for your variables, rather than compute the solutions manually you can use a computer to do it. Apps like Mathematica, GeoGebra and Desmos allow you to graph the equations you want and find the solutions. I myself use a Graphic Display Calculator (TI-Nspire CX II) for this.(1 vote)
- What should we do if we have constraints as well as boundaries and we need a local extremum?(2 votes)
- It should pretty much be the same thing, except you only look at the solutions that are within the boundary. It's possible that the highest or lowest value is at the boundary, but that would be a global extremum, not a local one.(1 vote)
- Is there a similar method of using Lagrange multipliers to solve constrained optimization problems for integer solutions?(1 vote)