Local linearization
Learn how to generalize the idea of a tangent plane into a linear approximation of scalar-valued multivariable functions.
What we're building to
- Local linearization generalizes the idea of tangent planes to any multivariable function. Here, I will just talk about the case of scalar-valued multivariable functions.
- The idea is to approximate a function near one of its inputs with a simpler function that has the same value at that input, as well as the same partial derivative values.
- Written with vectors, here's what the approximation function looks like:
$$L_f(\mathbf{x}) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0)$$
- This is called the local linearization of $f$ near $\mathbf{x}_0$.
Tangent planes as approximations
In the previous article, I talked about finding the tangent plane to a two-variable function's graph.
The formula for the tangent plane ended up looking like this:
$$T(x, y) = f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)$$
This function $T(x, y)$ often goes by a different name: the "local linearization" of $f$ at the point $(x_0, y_0)$. You can think about this as the simplest function satisfying two properties (written out as equations below):
- It has the same value as $f$ at the point $(x_0, y_0)$.
- It has the same partial derivatives as $f$ at the point $(x_0, y_0)$.
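In symbols, those two requirements on $T$ say:
$$T(x_0, y_0) = f(x_0, y_0), \qquad \frac{\partial T}{\partial x}(x_0, y_0) = \frac{\partial f}{\partial x}(x_0, y_0), \qquad \frac{\partial T}{\partial y}(x_0, y_0) = \frac{\partial f}{\partial y}(x_0, y_0)$$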
As always in multivariable calculus, it is healthy to contemplate a new concept without relying on graphical intuition. That's not to say you shouldn't try to think visually; maybe just think purely about the input space, or about the relevant transformation, rather than about the graph.
Fundamentally, a local linearization approximates one function near a point based on the information you can get from its derivative(s) at that point.
In the case of functions with a two-variable input and a scalar (i.e. non-vector) output, this can be visualized as a tangent plane. However, with higher dimensions we don't have this visual luxury, so we are left to think about it just as an approximation.
In real-world applications of multivariable calculus, you almost never care about an actual plane in space. Instead, you might have some complicated function, like, oh, I don't know, air resistance on a parachute as a function of speed and orientation. Dealing with the actual function may be tricky or computationally expensive, so it's helpful to approximate it with something simpler, like a linear function.
What do I mean by "Linear function"?
Consider a function with a multidimensional input:
$$f(x_1, x_2, \dots, x_n)$$
This function is called linear if, in its definition, all the coordinates are just multiplied by constants, with nothing else happening to them. For example, it might look like this (the particular constants here are arbitrary):
$$f(x_1, x_2, x_3) = 2x_1 + 3x_2 + 5x_3$$
The full story of linearity goes deeper (hence the existence of the field "linear algebra"), but for now, this conception will do. Typically, instead of writing out all the variables like this, you would treat the input as a vector:
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
And you would define the function using a dot product with some constant vector $\mathbf{a}$:
$$f(\mathbf{x}) = \mathbf{a} \cdot \mathbf{x}$$
For the purposes of this article, and more generally when you talk about local linearization, you are allowed to add in a constant to this expression:
$$f(\mathbf{x}) = \mathbf{a} \cdot \mathbf{x} + c$$
If you wanted to be pedantic, this is no longer a linear function. It's what's called an "affine" function. But most people would say "whatever, it's basically linear".
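To make this concrete, here is a tiny example; the particular numbers are arbitrary, chosen only for illustration. The first function is linear, and the second one, with the added constant, is affine:
$$\mathbf{a} = \begin{bmatrix} 2 \\ 3 \\ 5 \end{bmatrix}, \qquad f(\mathbf{x}) = \mathbf{a} \cdot \mathbf{x} = 2x_1 + 3x_2 + 5x_3, \qquad g(\mathbf{x}) = \mathbf{a} \cdot \mathbf{x} + 8 = 2x_1 + 3x_2 + 5x_3 + 8$$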
Local linearization
Now, suppose your function $f(\mathbf{x})$ does not have the luxury of being linear. (The bolded $\mathbf{x}$ still represents a multidimensional vector.) It might be defined by some crazy expression way more wild than a dot product.
The idea of a local linearization is to approximate this function near some particular input value, $\mathbf{x}_0$, with a function that is linear. Specifically, here's what that new function looks like:
$$L_f(\mathbf{x}) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0)$$
- Notice, by plugging in $\mathbf{x} = \mathbf{x}_0$, you can see that both functions $f$ and $L_f$ will have the same value at the input $\mathbf{x}_0$.
- The vector dotted against the displacement $\mathbf{x} - \mathbf{x}_0$ is the gradient of $f$ at the specified input, $\nabla f(\mathbf{x}_0)$. This ensures that both functions $f$ and $L_f$ will have the same gradient at the specified input. In other words, all their partial derivative information will be the same. (Both checks are spelled out below.)
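To spell out both of those checks using the formula above:
$$L_f(\mathbf{x}_0) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0) \cdot (\mathbf{x}_0 - \mathbf{x}_0) = f(\mathbf{x}_0)$$
$$\nabla L_f(\mathbf{x}) = \nabla\!\left[ f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0) \right] = \nabla f(\mathbf{x}_0)$$
The first term is a constant, and the second term is linear in $\mathbf{x}$ with coefficient vector $\nabla f(\mathbf{x}_0)$, so the gradient of $L_f$ is the constant vector $\nabla f(\mathbf{x}_0)$ everywhere; in particular, it matches the gradient of $f$ at $\mathbf{x}_0$.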
I think the best way to understand this formula is to basically derive it for yourself in the context of a specific function.
Example 1: Finding a local linearization
Problem: Have yourself a function:
$$f(x, y, z) = z e^{x^2 - y^3}$$
Find a linear function $L_f(x, y, z)$ such that the value of $L_f$ and all its partial derivatives match those of $f$ at the following point:
$$(x_0, y_0, z_0) = (8, 4, 3)$$
Step 1: Evaluate f at the chosen point
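Plugging the point $(8, 4, 3)$ into the function from the problem statement:
$$f(8, 4, 3) = 3 e^{8^2 - 4^3} = 3 e^{64 - 64} = 3 e^0 = 3$$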
Step 2: Use this to start writing your function. Which function will be guaranteed to equal $f$ at the input $(x, y, z) = (8, 4, 3)$? Any function of the form
$$L_f(x, y, z) = 3 + a(x - 8) + b(y - 4) + c(z - 3)$$
where $a$, $b$ and $c$ are constants.
The partial derivatives of $L_f$, as you have written it so far, are precisely these constants $a$, $b$ and $c$. So to force our function to have the same partial derivative information as $f$ at the point $(8, 4, 3)$, we just need to set these constants equal to the corresponding partial derivatives of $f$ at this point.
Step 3: Compute each partial derivative of $f(x, y, z) = z e^{x^2 - y^3}$
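Working these out (each one is an application of the chain rule to the exponent $x^2 - y^3$):
$$\frac{\partial f}{\partial x} = 2xz\, e^{x^2 - y^3}, \qquad \frac{\partial f}{\partial y} = -3y^2 z\, e^{x^2 - y^3}, \qquad \frac{\partial f}{\partial z} = e^{x^2 - y^3}$$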
Now we evaluate each of these at $(8, 4, 3)$.
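Since $8^2 - 4^3 = 0$, the exponential factor equals $e^0 = 1$ at this point, so:
$$f_x(8, 4, 3) = 2 \cdot 8 \cdot 3 = 48, \qquad f_y(8, 4, 3) = -3 \cdot 4^2 \cdot 3 = -144, \qquad f_z(8, 4, 3) = 1$$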
Step 4: Replacing the constants $a$, $b$ and $c$ in the expression of $L_f$ with these partial derivative values, what do you get?
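Carrying out that substitution:
$$L_f(x, y, z) = 3 + 48(x - 8) - 144(y - 4) + 1 \cdot (z - 3)$$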
Now notice what this looks like if you write it with vector notation.
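In vector notation, the same function reads:
$$L_f\!\left(\begin{bmatrix} x \\ y \\ z \end{bmatrix}\right) = 3 + \begin{bmatrix} 48 \\ -144 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} x - 8 \\ y - 4 \\ z - 3 \end{bmatrix}$$
Here the constant $3$ is $f(8, 4, 3)$, the vector $[48, -144, 1]$ is the gradient $\nabla f(8, 4, 3)$, and $[x - 8, y - 4, z - 3]$ is $\mathbf{x} - \mathbf{x}_0$.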
It is just a specific form of the general formula shown above.
Example 2: Using local linearization for estimation
What follows is by no means a practical application, but working through it will help give a feel for what local linearization is doing.
Problem: Suppose you are on a desert island without a calculator, and you need to estimate $\sqrt{2.01 + \sqrt{0.99 + \sqrt{9.01}}}$. How would you do it?
Solution:
We can view this problem as evaluating a certain three-variable function at the point $(2.01, 0.99, 9.01)$, namely
$$f(x, y, z) = \sqrt{x + \sqrt{y + \sqrt{z}}}$$
I don't know about you, but I'm not sure how to evaluate square roots by hand. If only this function was linear! Then working it out by hand would only involve adding and multiplying numbers. What we could do is find the local linearization at a nearby point where evaluating f is easier. Then we can get close to the right answer by evaluating the linearization at the point left parenthesis, 2, point, 01, comma, 0, point, 99, comma, 9, point, 01, right parenthesis.
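Concretely, the plan is to build a function of the form below for some convenient nearby point $\mathbf{x}_0$ (yet to be chosen), then evaluate it at the point we actually care about:
$$\sqrt{2.01 + \sqrt{0.99 + \sqrt{9.01}}} = f(2.01,\, 0.99,\, 9.01) \approx L_f(2.01,\, 0.99,\, 9.01) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0) \cdot \left( \begin{bmatrix} 2.01 \\ 0.99 \\ 9.01 \end{bmatrix} - \mathbf{x}_0 \right)$$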
The point we care about is very close to the much simpler point $(2, 1, 9)$, so we find the local linearization of $f$ near that point. As before, we must find
- The value of $f$ at $(2, 1, 9)$
- All partial derivatives of $f$ at $(2, 1, 9)$
The first of these is
$$f(2, 1, 9) = \sqrt{2 + \sqrt{1 + \sqrt{9}}} = \sqrt{2 + \sqrt{4}} = \sqrt{2 + 2} = 2$$
Looks like someone chose a few convenient input values, eh?
On to the partial derivatives (heavy sigh). Since the square roots are abundant, let's write out for ourselves the derivative of $\sqrt{x}$:
$$\frac{d}{dx} \sqrt{x} = \frac{1}{2\sqrt{x}}$$
Okay, here we go. The simplest partial derivative is $f_x$:
$$f_x(x, y, z) = \frac{1}{2\sqrt{x + \sqrt{y + \sqrt{z}}}}$$
Since $y$ is nestled in there, $f_y$ requires some chain rule action:
$$f_y(x, y, z) = \frac{1}{2\sqrt{x + \sqrt{y + \sqrt{z}}}} \cdot \frac{1}{2\sqrt{y + \sqrt{z}}}$$
Nestled even deeper, that tricky $z$ will require two iterations of the chain rule:
$$f_z(x, y, z) = \frac{1}{2\sqrt{x + \sqrt{y + \sqrt{z}}}} \cdot \frac{1}{2\sqrt{y + \sqrt{z}}} \cdot \frac{1}{2\sqrt{z}}$$
Next, evaluate each one of these at $(2, 1, 9)$. This might seem like a lot, but they are all made up of the same three basic components:
$$2\sqrt{z} = 2\sqrt{9} = 6, \qquad 2\sqrt{y + \sqrt{z}} = 2\sqrt{1 + 3} = 4, \qquad 2\sqrt{x + \sqrt{y + \sqrt{z}}} = 2\sqrt{2 + 2} = 4$$
Plugging these values into our expressions for the partial derivatives, we have
$$f_x(2, 1, 9) = \frac{1}{4}, \qquad f_y(2, 1, 9) = \frac{1}{4} \cdot \frac{1}{4} = \frac{1}{16}, \qquad f_z(2, 1, 9) = \frac{1}{4} \cdot \frac{1}{4} \cdot \frac{1}{6} = \frac{1}{96}$$
Unraveling the formula for local linearization, we get
$$L_f(x, y, z) = 2 + \frac{1}{4}(x - 2) + \frac{1}{16}(y - 1) + \frac{1}{96}(z - 9)$$
Finally, after all this work, we can plug in $(x, y, z) = (2.01, 0.99, 9.01)$ to compute our approximation:
$$L_f(2.01,\, 0.99,\, 9.01) = 2 + \frac{1}{4}(0.01) + \frac{1}{16}(-0.01) + \frac{1}{96}(0.01)$$
Calculating this by hand still isn't easy, but at least it's doable. When you work it out, the final answer is
$$L_f(2.01,\, 0.99,\, 9.01) = 2 + 0.0025 - 0.000625 + 0.000104\ldots \approx 2.0019792$$
Had we just used a calculator, the answer is
$$\sqrt{2.01 + \sqrt{0.99 + \sqrt{9.01}}} \approx 2.0019779$$
So our approximation is pretty good!
Why do we care?
Although it is not common to find yourself estimating square roots on a desert island (at least where I'm from), what is common in the contexts of math and engineering is wrangling with complicated but differentiable functions. The phrase "just linearize it" is tossed around so much that not knowing what it means could be awkward.
Remember, a local linearization approximates one function near a point based on the information you can get from its derivative(s) at that point. Even though you can use a computer to evaluate functions, that's not always enough.
- You might need to evaluate it many thousands of times per second, and working it out in full takes too long.
- Maybe you don't even have the function explicitly written out, and you just have a few measurements near a point which you wish to extrapolate.
- Sometimes what you care about is the inverse function, which can be hard or even impossible to find for the function as a whole, whereas inverting linear functions is relatively straightforward.
Summary
- Local linearization generalizes the idea of tangent planes to any multivariable function.
- The idea is to approximate a function near one of its inputs with a simpler function that has the same value at that input, as well as the same partial derivative values.
- Written with vectors, here's what the approximation function looks like:
$$L_f(\mathbf{x}) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0)$$
- This is called the local linearization of $f$ near $\mathbf{x}_0$.
Want to join the conversation?
- At "Example 1", given that f(x, y, z) = e^(x^2-y^3), shouldn't f(4, 8, 3) be 3*e^(4^2-8^3) = 3*e^(-496)≃0 ?(13 votes)
- You have the function the incorrect way around I believe. it is f(8,4,3). NOT f(4,8,3)(3 votes)
- Also, in example one, when they show what the plane looks like using vector notation, shouldn't the second vector components be x-8 not x, and y-4, not y and etc?(14 votes)
- Is "the local linearization" the same as "Linear approximation"(https://en.wikipedia.org/wiki/Linear_approximation) ?(4 votes)
- These are the same. To clarify the equality, compare this article to the formula that follows after quote: "Linear approximations for vector functions of a vector variable are obtained in the same way, with the derivative at a point replaced by the Jacobian matrix."
(a, b) at Wikipedia is (x0, y0) here.(4 votes)
- Why does local linearization sound similar to taylor series?!(4 votes)
- Because it is indeed a (first-order) Taylor expansion for the original function! It's just that in this case you're trying to approximate a multivariable function, and so you're using partial derivatives instead of regular derivatives.(3 votes)
- Under Example 1, after the line "Now notice what this looks like if you write it with vector notation," under the explanation, the last matrix should have the terms "x-8, y-4, and z-3.(4 votes)
- Can this be edited to include an example involving the linear localization of a vector valued function? Maybe something simple like from $\mathbb{R}^2 \to \mathbb{R}^3$?(3 votes)
- Does it have anything to do with approximating a single variable function with Taylor series, when we have just two first terms?(3 votes)
- Yep! This is essentially the first order Taylor polynomial, but instead of dealing with the single variable case we are dealing with many.(1 vote)
- "...we consider all inputs to be part of a vector x..."
Should vector x have components x.y and so on instead of x0, y0 and so on?(2 votes) - Coming from Geggles' comment below, If "Example 1" is changed to have the tangent point be (8,4,3) it becomes a much nicer problem resulting in
Lf(x,y,z) = 3 + 48(x-8) - 144(y-4) + (z-3)(2 votes) - Why doesn't the partial with respect to z in the last example (desert island) have a coefficient of 1/8? Doesn't 1/2*1/2*1/2 = 1/8, not 1/6? Did I do that wrong?(1 vote)