Taylor & Maclaurin polynomials intro (part 2)

Taylor & Maclaurin polynomials are a very clever way of approximating any function with a polynomial. In this video we come up with the general formula for the nth term in a Taylor polynomial. Created by Sal Khan.

Want to join the conversation?

  • menglish84
    I am confused why Sal is left with only f'(c) after he expands f'(c)(x-c). You get f'(c)x - f'(c)c, but from there I just don't get it.
    (43 votes)
    • RKHirst
      You're taking the derivative. Here, x is variable; c, f'(c) are constants.

      If you expand f'(c)(x - c), you get
      f'(c)x - f'(c)c
      The second term is two constants: its derivative is zero.
      The first term is (some constant) * x: its derivative is (the constant). Which is f'(c).

      If f'(c) gets in your way, you can rewrite it to remember it is a constant. For example, let
      a = f'(c)
      d/dx ( ax ) = a
      (106 votes)
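RKHirst's substitution a = f'(c) can be checked symbolically. A minimal sketch using sympy (assuming it is installed), with the symbol a standing in for the constant f'(c):

```python
import sympy

x, c, a = sympy.symbols("x c a")  # a plays the role of the constant f'(c)

# Expand a*(x - c) into a*x - a*c, then differentiate with respect to x:
# the a*c term is a constant and vanishes, leaving just a.
expanded = sympy.expand(a * (x - c))
derivative = sympy.diff(expanded, x)
print(expanded)    # a*x - a*c
print(derivative)  # a
```

The result is exactly the constant a, mirroring the d/dx(ax) = a rewriting in the answer above.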
  • Enya Hsiao
    I don't understand the point of a Taylor expansion at c, since the Maclaurin series already does the job of evaluating at zero. What is the additional function of a Taylor series? Or is it just generalizing expansions so that we are not confined to the point zero?
    (14 votes)
    • ArDeeJ
      Sometimes it's useful to be able to evaluate at some other point than zero, especially if the point we are actually interested in is far away from zero.

      Suppose we wanted to find the value of sin 100. If we used a zero-centered Taylor series, we would have to calculate very many terms to get an accurate answer.
      (31 votes)
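ArDeeJ's point can be made concrete numerically. A quick sketch (the degree 9 and the center 99 are arbitrary choices for illustration; the derivatives of sin at c cycle through sin c, cos c, -sin c, -cos c):

```python
import math

def taylor_sin(x, c, n):
    """Degree-n Taylor polynomial of sin, centered at c."""
    # derivatives of sin at c repeat with period 4
    derivs = [math.sin(c), math.cos(c), -math.sin(c), -math.cos(c)]
    return sum(derivs[k % 4] / math.factorial(k) * (x - c) ** k
               for k in range(n + 1))

x = 100.0
far = taylor_sin(x, 0.0, 9)    # centered at 0, far from x: wildly off
near = taylor_sin(x, 99.0, 9)  # centered near x: already very accurate
print(abs(far - math.sin(x)))   # enormous error
print(abs(near - math.sin(x)))  # tiny error
```

With the center close to the point of interest, nine terms already give roughly six correct decimal places; centered at zero, the same nine terms are off by trillions.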
  • Alonzo Archer
    Hi, first let me say thank you in advance for reviewing my questions. Second, it should be noted that I am writing this because I am confused, so I realize that my lack of understanding may cause me to mischaracterize some things. Also, putting into words where I am lost with this topic is not trivial for me.

    I have yet to connect all of the parts of the Taylor series into a sensible story when it comes to the Taylor series for functions like sine. My understanding is that a Taylor series expansion can actually be equivalent to the sine function (I am aware that not all Taylor expansions equal the function in question). I get that the series starts with identifying a point from which to expand the series, and having the derivatives of the sine function (in this case) and of its Taylor series expansion match at this point. The process then (at a high level) uses the derivatives at this single point as one of several components that make up the terms in the Taylor series (other components include increasing polynomial degrees that purposefully map to the derivatives at the chosen expansion point). These terms are added together, and the more we add, the better the approximation to sine — not just at that single point, but for all the points that one can input for sine.

    So what confuses me?

    How is it that matching the Taylor series and sine derivatives at a single point enables us to take the individual results of the Taylor series version and place them collectively into a summation of terms that correctly maps all of the inputs and outputs for sine? I have been struggling to find a good explanation as to why or how this step in the process works. How does combining the derivatives with the other components of the terms make the sine Taylor series work? How can one interpret a process that takes the individual results of derivatives at a single point, attaches them to their appropriate polynomial terms, and then adds them together? This is especially confusing knowing that sine clearly has a repetitive nature to it. I do see this reflected in the Taylor series expansion by the repetitive derivatives in the terms, but seeing how sine's pattern is enforced by the rest of the components of the terms is not clear. The other components have a pattern of increasing polynomial degrees, as well as the nth term being divided by n factorial to make the derivatives work out, but what role is this pattern playing in realizing the sine pattern? Also, why is it that the more you add, the better the approximation to sine?

    Perhaps there is another way to ask the question. What if you did not know how the sine Taylor series was constructed? You did not know things like the fact that it is made up of terms that allow the derivatives to work out, etc. All you have is a polynomial that is claimed to approximate the sine function for any input. How could one then analyze and use the operations, objects, definitions, properties, and patterns associated with this infinite polynomial representation of sine to conclude that it is indeed equivalent to the sine function — and not just at a single point, but at all points?
    (14 votes)
  • Francisco Cubillos
    The only thing I don't understand is the (x-c) part. Why put in the c and not simply use x, x^2, x^3, and so on, like in the Maclaurin series?
    (4 votes)
  • JaDeriv
    I'm having trouble understanding the difference between a Taylor Series and a Taylor Polynomial. Would anyone please explain how these relate to each other? Is a Taylor Series a string of a certain number of Taylor Polynomials?
    Thank you!
    (5 votes)
  • Jorge Vasquez
    Why exactly does adding more derivatives make the function approximation better? I know that the 2nd derivative involves concavity, inflection points, etc. But I just don't understand its connection to function approximation?
    (3 votes)
    • Just Keith
      You are trying to match two functions as closely as possible. Think of it this way:
      There are a vast number of functions that have both the same f(0) and the same slope at that point. There are fewer that have the same original value, slope, and concavity. Match the third derivative and you have eliminated even more functions. Keep matching subsequent derivatives and you will systematically keep eliminating functions that don't match. If you could take infinitely many derivatives, then you would eliminate all other functions and be left only with functions that are equal to each other in infinitely many ways -- in other words, they would be identical.
      (8 votes)
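The "matching more derivatives eliminates more mismatched functions" idea shows up numerically as shrinking error. A sketch using e^x, whose every derivative at 0 equals 1 (the choice of e^x and degree 5 is just for illustration):

```python
import math

def maclaurin_exp(x, n):
    """Degree-n Maclaurin polynomial of e^x: every derivative at 0 is 1."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 1.0
errors = [abs(maclaurin_exp(x, n) - math.exp(x)) for n in range(6)]
print(errors)  # each added derivative-matching term shrinks the error
```

The error list is strictly decreasing: every extra matched derivative pins the polynomial more tightly to the function.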
  • Gavriel Feria
    Why (x - c) in the approximation?
    (2 votes)
    • brian.g.neaves
      Do you remember how a parabola f(x) = x^2 can be translated? f(x) = x^2 is a parabola opening up, centered at 0. A new function g(x) = (x - 2)^2 is just like f(x) = x^2, just moved over 2 units to the right on the x-axis. The same idea applies in this video. Instead of approximating the function at 0, we approximate the function at a new point x = c. Hope that helps.
      (8 votes)
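The translation described above can be verified directly; a tiny sketch:

```python
def f(t):
    return t ** 2          # parabola with vertex at 0

def g(t):
    return (t - 2) ** 2    # the same parabola moved 2 units right

# g reproduces f's values, just 2 units later along the axis
for t in [-1.0, 0.0, 0.5, 3.0]:
    print(t, f(t), g(t + 2))  # the last two columns match
```

Replacing x with (x - c) throughout is this same shift, applied to the whole Taylor polynomial.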
  • Igor Stravinsky
    I don't understand the purpose of approximating a function using the maclaurin or taylor series. Doesn't the original function need to be given in order for it to be approximated? Why do we need a less accurate approximation of a function when we already have the function?
    (3 votes)
    • tcauttero
      It can be used when the function's equation is not known, such as when the data come from a plot. If the equation is very complex, it can be used to simplify things and make further calculations much easier. Also, many equations are too hard to solve directly, so approximating a function by a series lets you work with polynomials, which are much easier to manipulate.
      (2 votes)
  • Zeimer
    Why would anyone use it and mess with (x - c)^n when we can just use Maclaurin?
    (2 votes)
    • 123HeskeyTime
      If we are trying to approximate the function at an x value which isn't near zero, then a Maclaurin series becomes inaccurate. On the other hand, the generalized Taylor series will be accurate for any x we try to approximate the function at, since we can choose where the series is centered (the c value).
      (4 votes)
  • Saraph
    Sal writes (c - c). What exactly does this mean? I'm confused about the relationship between x and c.
    (2 votes)

Video transcript

In the last several videos, we learned how we can approximate an arbitrary function, but a function that is differentiable and twice and thrice differentiable and all of the rest. How we can approximate a function around x is equal to 0 using a polynomial. If we just have a zero-degree polynomial, which is just a constant, you can approximate it with a horizontal line that just goes through that point. Not a great approximation. If you have a first-degree polynomial, you can at least get the slope right at that point. If you get to a second-degree polynomial, you can get something that hugs the function a little bit longer. If you go to a third-degree polynomial, maybe something that hugs the function even a little bit longer than that. But all of that was focused on approximating the function around x is equal to 0. And that's why we call it the Maclaurin series or the Taylor series at x is equal to 0. What I want to do now is expand it a little bit, generalize it a little bit, and focus on the Taylor expansion at x equals anything. So let's say we want to approximate this function when x-- so this is our x-axis-- when x is equal to c. So we can do the exact same thing. We could say, look, our first approximation is that our polynomial at c should be equal to-- or actually, let me put it better-- our polynomial could just be-- if it's just going to be a constant, it should at least equal whatever the function equals at c. So it should just equal f of c. f of c is a constant. It's that value right over there. We're assuming that c is given. And then you would have-- this would just be a horizontal line that goes through f of c. That's p of x is equal to f of c. Not a great approximation, but then we could try to go for having this constraint matched, plus having the derivative matched.
So what this constraint gave us-- just as a reminder-- this gave us the fact that at least p of c, the approximation at c, our polynomial at c, at least is going to be equal to f of c, right? If you put c over here, it doesn't change what's on the right-hand side, because this is just going to be a constant. Now, let's take the constraint one step further. What if we want a situation where this is true, and we want the derivative of our polynomial to be the same thing as the derivative of our function when either of them is at c? So for this situation, what if we set up our polynomial-- and you'll see a complete parallel to what we did in earlier videos. We're just going to shift it a little bit for the fact that we're not at 0. So now, let's define p of x to be equal to f of c plus f prime of c. So whatever the slope is at this point of the function, whatever the function's slope is, times-- and you're going to see something slightly different over here-- x minus c. Now, let's think about what this minus c is doing. So let's test, first of all, that we didn't mess up our previous constraint. So let's evaluate this at c. So now, we know that p of c is going to be equal to f of c plus f prime of c times c minus c. Wherever you see an x, you put a c in there. c minus c. Well, this term right over here is going to be 0. And so this whole term right over here is going to be 0. And so you're just left with p of c is equal to f of c. You're just left with that constraint right over there. And the only reason why we were able to blank out this second term right over here is because we had f prime of c times x minus c. The x minus c makes all of the terms after this irrelevant. We can now go verify that the second constraint is true.
So let's try-- p prime of x is going to be the derivative of this, which is just 0, because this is going to be a constant, plus the derivative of this right over here. And what's that going to be? Well, that's going to be-- you can expand this out to be f prime of c times x minus f prime of c times c, which would just be a constant. So if you take the derivative of this thing right here, you're just going to be left with an f prime of c. So the derivative of our polynomial is now a constant. So obviously, if you were to evaluate this at c, p prime at c, you're going to get f prime of c. So once again, it meets the second constraint. And now when you have both of these terms, maybe our approximation will look something like this. It will at least have the same slope as f of x. Our approximation is getting a little bit better. And if we keep doing this-- and we're using the exact same logic that we used when we did it around 0, when we did the Maclaurin expansion-- you get the general Taylor expansion for the approximation of f of x around c to be the polynomial. So the polynomial p of x is going to be equal to-- and I'll just expand it out. And this is very similar to what we saw before. f of c plus f prime of c times x minus c. You might even guess what the next few terms are going to be. It's the exact same logic. Watch the videos on the Maclaurin series where I go a few more terms into it. It becomes a little bit more complicated taking the second and third derivatives, and all of the rest, just because you have to expand out these binomials, but it's the exact same logic. So then you have plus your second-degree term, f prime prime of c, divided by 2 factorial. And this is just like what we saw in the Maclaurin expansion. And just to be clear, you could say that there's a 1 factorial down here. I didn't take the trouble to write it because it doesn't change the value.
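Both constraints checked in the transcript so far -- p(c) = f(c) and p'(c) = f'(c) -- can be verified symbolically. A sketch using sympy, with sin as an arbitrary stand-in for f:

```python
import sympy

x, c = sympy.symbols("x c")
f = sympy.sin(x)                   # any smooth function would do here
fc = f.subs(x, c)                  # f(c)
fpc = sympy.diff(f, x).subs(x, c)  # f'(c)

# first-degree Taylor polynomial around c: p(x) = f(c) + f'(c)*(x - c)
p = fc + fpc * (x - c)

print(sympy.simplify(p.subs(x, c) - fc))                  # 0: p(c) = f(c)
print(sympy.simplify(sympy.diff(p, x).subs(x, c) - fpc))  # 0: p'(c) = f'(c)
```

The (x - c) factor is what makes the derivative term vanish at x = c while still contributing the slope f'(c) when differentiated.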
And then that times x minus c squared, plus the third derivative of the function evaluated at c, over 3 factorial, times x minus c to the third power. And I think you get the general idea. You can keep adding more and more and more terms like this. It does make things a little bit harder, but it's not so bad if you're willing to do the work. Instead of having just an x here and just an x squared here, you have an (x-c) squared and an (x-c) to the third, which makes the analytical math a little bit hairier, a little bit more difficult. But this will approximate your function better as you add more and more terms, and around an arbitrary value as opposed to just around x is equal to 0. And I'll show you that using WolframAlpha in the next video.
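The general expansion described in the transcript, p(x) = f(c) + f'(c)(x-c) + f''(c)/2!·(x-c)² + ..., can be sketched in code. An illustrative implementation using sympy (the function name and the choice of e^x around c = 1 are this sketch's own, not from the video):

```python
import math
import sympy

def taylor_poly(f_expr, x, c_value, n):
    """Degree-n Taylor polynomial of f_expr around x = c_value:
    p(x) = sum over k = 0..n of f^(k)(c) / k! * (x - c)^k
    """
    p = sympy.Integer(0)
    deriv = f_expr
    for k in range(n + 1):
        p += deriv.subs(x, c_value) / math.factorial(k) * (x - c_value) ** k
        deriv = sympy.diff(deriv, x)
    return sympy.expand(p)

x = sympy.symbols("x")
p3 = taylor_poly(sympy.exp(x), x, 1, 3)  # e^x expanded around c = 1

# near the center, the cubic already tracks e^x closely
error = float(abs(p3.subs(x, 1.1) - sympy.exp(1.1)))
print(error)  # small, on the order of 1e-5
```

Raising n tightens the approximation near c, exactly as adding terms does in the video.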