
## AP®︎/College Calculus BC

### Course: AP®︎/College Calculus BC > Unit 10

Lesson 12: Lagrange error bound

# Worked example: estimating eˣ using Lagrange error bound

The Lagrange error bound (also called the Taylor remainder theorem) can help us determine the degree of Taylor/Maclaurin polynomial needed to approximate a function to within a given error bound. See how it's done when approximating eˣ at x = 1.45.

## Want to join the conversation?

• In the video, we say M = e^2. I have no idea how we determined this. Is it just experience? I understand the interval 0 < x ≤ 2 contains x and c, but why not 1 < x ≤ 10? What is the logic behind e^2? • If you use the larger interval, then the maximum of e^x on that interval is e^10. That's a larger number than e^2, and thus a worse bound. We choose [0, 2] because it gives the least upper bound we can get from an interval that still contains all the necessary values.
• In the video, why is zero part of the interval; can't it be 1.45? • I believe the interval could be 1.45 ≤ x ≤ 2. I'm not really sure why he chose to write 0 at the lower end. It doesn't really matter, since e^x is increasing over this entire domain: we are looking for the largest value M on our interval, and that is going to be e^2 regardless of whether the lower end is 0 or 1.45.
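The effect of the interval choice on the bound can be checked numerically. This is a small sketch (plain Python, function names are my own) comparing the bound for e^x at x = 1.45 using M = e^2 (from the interval [0, 2]) against M = e^10 (from [0, 10]):

```python
import math

def lagrange_bound(M, x, n):
    # Lagrange error bound for a degree-n Maclaurin polynomial:
    # |R_n(x)| <= M * |x|^(n+1) / (n+1)!
    return M * abs(x) ** (n + 1) / math.factorial(n + 1)

n, x = 3, 1.45                               # degree 3 chosen arbitrarily
tight = lagrange_bound(math.exp(2), x, n)    # M = e^2, from [0, 2]
loose = lagrange_bound(math.exp(10), x, n)   # M = e^10, from [0, 10]
print(tight < loose)  # True: the smaller M gives the sharper bound
```

Both are valid bounds; the smaller M simply tells you more about the true error.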
• Is there another way to find the degree n needed to keep the error below a certain value (that is, without using trial and error)? • If we do many estimation problems, we will find the right numbers faster.

There is a Swedish word for this, 'ögonmått', which translates to 'eye measurement'.

If we practice similar problems many times, we will be able to do it with ease.

The bulletproof way of estimating the nth term of the Lagrange error is to rewrite e^x with sigma notation.

This sigma notation comes from the elementary Maclaurin expansion of e^x:

e^x = 1 + x + x^2/2! + x^3/3! + ... + x^n/n! + O(x^(n+1))

The general term x^n/n! is the important bit here, because the first omitted term of this form determines the Lagrange error.

So we can rewrite the estimate for e^2 as follows:

e^x = Σ (x^n/n!) from n = 0 to infinity, so e^2 ≈ Σ (2^n/n!) from n = 0 to N.

With this method you can work out what value N must take for the error to be smaller than 10^-3.

When you have your N, use the terms up to n = N in the Maclaurin expansion of e^x to get your estimate!

This is the general way of finding estimates for e^x that I was taught.

If we want to use other elementary functions like cosine, logarithms, polynomials, or the inverse trigonometric functions, we can use their Maclaurin expansions to find the Lagrange error instead.

I hope this was a little helpful!
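The procedure described above can be sketched in a few lines of code (a rough sketch; the function name is my own). Loop over n until the Lagrange bound for e^2, with M = e^2 on [0, 2], drops below 10^-3, then sum the Maclaurin terms up to that n:

```python
import math

def smallest_n(x=2.0, tol=1e-3):
    # Lagrange bound for a degree-n Maclaurin polynomial of e^x:
    #   |R_n(x)| <= M * x^(n+1) / (n+1)!,  with M = e^x (max of e^z on [0, x])
    M = math.exp(x)
    n = 0
    while M * x ** (n + 1) / math.factorial(n + 1) >= tol:
        n += 1
    return n

n = smallest_n()  # n = 10 for x = 2, tol = 1e-3
# Partial Maclaurin sum with terms 0..n
approx = sum(2.0 ** k / math.factorial(k) for k in range(n + 1))
print(n, abs(math.exp(2) - approx))  # the actual error is well below 1e-3
```

Note that the actual error comes out smaller than 10^-3, as the Lagrange inequality guarantees.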
• Why didn't Sal take out his trusty TI-85? • I found this video helpful, but I am still confused about Lagrange error bounds for the natural log. Could you please do a video featuring a natural log problem? The hints often show an additional step at the end, removing the z^(n+1) from the equation, since the equation without z is greater. My answers using z seem like they would be more accurate, but they are marked wrong because the hint's calculations remove z in that final step. • I used to struggle with the same problem; here's how you could tackle it.
You are given the expression for the n-th derivative of the natural log function at the very beginning of the exercise, use this information to find the (n+1)th derivative of natural log of x (simply substitute n+1 for every n in the expression).
Recall from the lesson "Taylor polynomial remainder" that this expression gives you M, the upper bound of the error function. (https://www.khanacademy.org/math/ap-calculus-bc/bc-series-new/bc-10-12/v/proof-bounding-the-error-or-remainder-of-a-taylor-polynomial-approximation)

M is the maximum value of the (n+1)th derivative over the interval of interest: it's a function of n and has z as a parameter. Since M is the maximum, you want to make it as big as possible, and since z can take any value between 1 and 1.6, you pick the one that does so: with z in the denominator, you want z as small as possible.

This seemed counterintuitive to me: if M is the upper bound for the error function, shouldn't I make it as small as possible?
Well, for the Lagrange inequality to be valid in the first place, you need to use the maximum possible value of M.

Hope this helps.
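To make the ln case concrete, here is a small sketch (my own names; I assume center c = 1 and interval [1, 1.6] as in the exercise). With z = 1 the bound simplifies to 0.6^(n+1)/(n+1), and the actual error of the Taylor polynomial stays under it:

```python
import math

# The nth derivative of ln(x) is (-1)^(n+1) * (n-1)! / x^n, so the (n+1)th
# has magnitude n! / z^(n+1). Picking z = 1 (the smallest z on [1, 1.6])
# maximizes it: M = n!. The Lagrange bound is then
#   |R_n(1.6)| <= (n! / (n+1)!) * 0.6^(n+1) = 0.6^(n+1) / (n+1)

def ln_bound(n, x=1.6, c=1.0):
    return (x - c) ** (n + 1) / (n + 1)

def ln_taylor(n, x=1.6, c=1.0):
    # Taylor polynomial of ln centered at c = 1: sum of (-1)^(k+1)(x-1)^k / k
    return sum((-1) ** (k + 1) * (x - c) ** k / k for k in range(1, n + 1))

n = 4
actual_error = abs(math.log(1.6) - ln_taylor(n))
print(actual_error, ln_bound(n))  # the actual error stays under the bound
```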
• Hi, I got stuck on the practice problem estimating ln(1.6), I used the hints, and followed the remainder down to:

Rn(1.6) ≤ 0.6^(n+1) / ((n+1) · z^(n+1))

... I used z = 1.6. Was I supposed to use 1 because P(x) is centered at x = 1 in this case? Or are we supposed to choose z as the lowest x value on the interval of approximation?

Cheers, K • Hey Keiran Lond,
Here are the steps to using Lagrange's error bound:
1. Find an expression for the (n+1)th derivative of f(x) (or whatever the function is).
2. Find the maximum value for the (n+1)th derivative of f(z) for any z between x and c.
3. Lagrange's error bound assures that
`|Rn,c(x)| <= (M / (n+1)!) * |x - c|^(n+1)`
where M is the maximum value of the (n+1)th derivative of f(z) for any z between x and c, "c" is where the series is centered (0 for a Maclaurin series), and x is the plugged-in value we're approximating.
4. Plug in M, x, and c for that equation, and simplify.
5. Find the lowest number of terms (n) that evaluates the expression to less than the error bound.

NOTE: The z is the value which returns the greatest value M, in the interval of the respective function.

Hope this helps,
- Convenient Colleague

Also, I think the variables used in the different sources are sometimes changed or swapped, which can get confusing. If you follow along with my steps it should work fine (at least it does for me!).
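The numbered steps above can be sketched as code (a minimal sketch with hypothetical names; M has to be supplied from step 2, since it depends on the function):

```python
import math

def lagrange_bound(M, x, c, n):
    # Step 3: |R_n,c(x)| <= (M / (n+1)!) * |x - c|^(n+1)
    return M / math.factorial(n + 1) * abs(x - c) ** (n + 1)

def smallest_degree(M, x, c, tol):
    # Step 5: find the lowest n whose bound drops below the error target.
    n = 0
    while lagrange_bound(M, x, c, n) >= tol:
        n += 1
    return n

# The video's e^x example at x = 1.45, centered at 0, error target 1e-3;
# M = e^2 rounded up to 8 for simplicity (e^2 < 8, so still a valid bound).
print(smallest_degree(8, 1.45, 0, 1e-3))
```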
• What does the error .001 really mean? Is it the difference between b and a in the Taylor series, or the area between the curves from a to b? • It is the maximum difference between the curves at b. In other words, if you want to use a Taylor polynomial p(x), centered at a, to approximate a function f(x), then you need to know f(a), f'(a), f''(a), and so on. The real value of this is that you can use p(x) to get approximate values for f(b), but these will only be approximate. The maximum error, which bounds the difference between f(b) and p(b), is 0.001 in this case; you can set the maximum error to any number necessary for your application.
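A quick numeric illustration of this point (my own sketch): the error is a single pointwise gap, |f(b) − p(b)|, not an area.

```python
import math

# f = exp, centered at a = 0, evaluated at b = 1.45, degree n = 8.

def maclaurin_exp(x, n):
    # Degree-n Maclaurin polynomial of e^x: sum of x^k / k! for k = 0..n
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

b, n = 1.45, 8
pointwise_error = abs(math.exp(b) - maclaurin_exp(b, n))
print(pointwise_error)  # a single number: the gap between the curves at b
```

With n = 8 this gap already sits below 0.001, consistent with the bound discussed in the video.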