### Course: AP®︎/College Calculus BC > Unit 10

Lesson 12: Lagrange error bound

# Taylor polynomial remainder (part 2)

The more terms we have in a Taylor polynomial approximation of a function, the closer we get to the function. But HOW close? In this video, we prove the Lagrange error bound for Taylor polynomials. Created by Sal Khan.

## Want to join the conversation?

• In the video, when you integrate both sides, isn't the value of n supposed to increase by one, making it n+1 instead of n-1?
• It's the derivative order that's decreasing, not an exponent. If you take the integral of f''(x), you get f'(x) + C; if you take the integral of f'(x), you get f(x) + C. You can see the derivative order drop from '' to ' and from ' to none.
• I don't understand how Sal goes from the integral of E^(n+1)(x) dx to just E^(n)(x), around the 10-minute mark.
• The function is not E raised to the power (n+1). The (n+1) is how many derivatives we've taken. So when we take the antiderivative, we "lose" one derivative and go from E^(n+1) to E^(n). It's not a power, it's a derivative :). Hope this helps!
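In symbols, the step being asked about is just (a restatement of the answer above, using the video's notation):

```latex
\int E^{(n+1)}(x)\,dx = E^{(n)}(x) + C
```

where the superscript (n+1) denotes the (n+1)-th derivative, not an exponent.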
• Can anyone explain why, at the step where -Ma <= c, we have to take the lower bound of c, making c = -Ma? I am really confused at that part.
• This part actually has an error, which can be fixed by using a definite integral between x and a (or a and x, depending on whether x is less than a), rather than an indefinite integral. I have sent a message to Mr. Khan about this error, so hopefully this gets corrected soon. Anyway, a definite integral with these bounds gets you from step to step quite smoothly :) Hope that helps!
• Sal just assumed we know M in this video, but how would one actually find it?
• Since the function is continuous on the interval [a, b], you know it attains a maximum value there. The maximum can be f(a), f(b), or f(c) where f'(c) = 0 (a critical point).
• Why are we free to choose the value of the integration constant in the bounds of the error function? Can we always choose any value of c while finding a function?
• We aren't "free" to choose c; it must lie within constraints that arise when trying to calculate it. We choose the smallest value we can so that the potential difference between the right-hand side and the left-hand side is as small as possible.
• Are there any videos that instruct on situations where M is given?
• Is it possible to make a better bound?
• In a general sense, for any given n, there is no better bound. You can prove this to yourself by constructing examples where E(x) is exactly equal to the bound shown in the video. Here is one such example. Let's say that f(x) = x + x^2/2, and that one takes a Taylor polynomial approximation of degree 1 (n = 1) at zero (a = 0). Then the polynomial approximation is P(x) = x; the error function is E(x) = x^2/2; E''(x) = 1 and thus M = 1; and M(x-a)^(n+1)/(n+1)! = 1 * x^2/2! = E(x).

Of course, it is true that you can get a better approximation by increasing n. In the above example, you get a perfect approximation ( E(x) = 0 ) by increasing n from 1 to 2.
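A quick numerical check of the example above can be sketched in Python (illustrative only; the function names here are mine, not from the thread):

```python
import math

# f(x) = x + x^2/2, approximated near a = 0 by its degree-1 Taylor polynomial P(x) = x.
def f(x):
    return x + x**2 / 2

def P(x):
    return x

a, n = 0.0, 1
M = 1.0  # f''(x) = 1 everywhere, so M = 1 on any interval

for x in [0.5, 1.0, 2.0]:
    error = abs(f(x) - P(x))
    bound = M * abs(x - a) ** (n + 1) / math.factorial(n + 1)
    print(x, error, bound)  # for this f, the error meets the bound exactly
```

Because E(x) = x^2/2 coincides with the bound here, the printed error and bound agree at every x, which is what makes the bound tight in general.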
• When taking the antiderivative of the remainder, why does that not have a constant added to it?

Shouldn't anti[E^(n+1)(x)] be equal to E^(n)(x) + C?
• It should indeed be E^(n)(x) + C, which Sal says later in the video, after which he goes further and finds an appropriate value for C. You just need to have patience.
• You need to figure it out yourself. Let's say you have the function `e^x` and a Maclaurin polynomial of degree 3. The polynomial is `1 + x + 1/2 * x^2 + 1/6 * x^3`. Let's also say that you want to approximate the function at `0.1`. M must bound the fourth derivative on the interval `[0, 0.1]`, and every derivative of `e^x` is `e^x` itself, which is an increasing function, so the biggest it can get on `[0, 0.1]` is `e^0.1`. This means that the `M` in this case will be `e^0.1`.
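Continuing that example, here is a short Python sketch (my own check, not from the answer) comparing the actual error of the degree-3 Maclaurin polynomial for `e^x` against the Lagrange bound:

```python
import math

x, a, n = 0.1, 0.0, 3

# Degree-3 Maclaurin polynomial for e^x: 1 + x + x^2/2 + x^3/6
def P(t):
    return sum(t**k / math.factorial(k) for k in range(n + 1))

# Every derivative of e^x is e^x, which is increasing,
# so the 4th derivative is largest at the right endpoint of [0, 0.1]:
M = math.exp(0.1)

actual_error = abs(math.exp(x) - P(x))
bound = M * abs(x - a) ** (n + 1) / math.factorial(n + 1)

print(actual_error <= bound)  # True: the bound really does cap the error
```

Both numbers come out on the order of 10^-6, with the actual error slightly below the bound, as the theorem guarantees.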