Maclaurin series of cos(x)

Approximating cos(x) with a Maclaurin series (which is like a Taylor polynomial centered at x=0 with infinitely many terms). It turns out that this series is exactly the same as the function itself! Created by Sal Khan.

Want to join the conversation?

  • smaheshs
    I could understand the steps and the end result, but I have a question: how is a function's expansion, determined entirely at 0, valid over the full range? My feeling is that since cosine is a periodic function, this holds true. Someone correct me if I am wrong.
    (19 votes)
    • Matthew Manes
      Two points.
      1. If you look at the pattern of the derivatives, you'll see that after four derivatives you are back to the original function, which means the pattern just keeps repeating no matter how far out you go.
      2. It also has to do with the series representation. If you write out the series, you'll see that it is Σ ((-1)^k · x^(2k)) / (2k)!, summed over all k ≥ 0. Notice that the variable appears simply as x^(2k). If it were (x - 3)^(2k) instead, the series would be centered at 3. The general form of a power series is Σ aₙ(x - c)ⁿ, so if there is no "c" in the representation, the series is centered at 0. (A short numerical sketch of this series appears after the comments below.)

      Hope this made some sense.
      (20 votes)
  • Nils Petter
    So... the Taylor Series is a way to represent a function? :S
    (10 votes)
    • Ivan Barreto
      With Taylor and Maclaurin series you can approximate a function with a polynomial. This is useful because you can turn a complicated function (defined by a limit, for example) into simple multiplication and exponentiation of numbers. That's how your calculator gets the sine and cosine of angles almost immediately: it doesn't have a table of sines and cosines for all possible values; it approximates them using Taylor/Maclaurin series (see the sketch after these comments).
      (40 votes)
  • Nathan Mayer
    This is supposed to approximate the function cos(x). Does it approximate the entire function all the way to infinity, or only near x=0?
    (6 votes)
  • Mehdi
    Is it actually proven that the Maclaurin/Taylor representations of these functions are EQUIVALENT to the corresponding functions as the number of terms in the representation goes to infinity?
    (3 votes)
  • Bruno Mansur
    Why is -sin(x) the derivative of cos(x)? I mean, where does this information come from? How can I calculate this?

    (1 vote)
    • Just Keith
      There are a variety of ways to prove this. I prefer direct computation using the definitions of the sine and cosine functions:
      Note that:
      sin x ≡ ½ i e^(-x*i) − ½ i e^(x*i)
      cos x ≡ ½ e^(-x*i) + ½ e^(x*i)

      Thus:
      d/dx cos x = d/dx {½ e^(-x*i) + ½ e^(x*i)}
      d/dx cos x = d/dx {½ e^(-x*i)} + d/dx {½ e^(x*i)}
      d/dx cos x = ½ d/dx e^(-x*i) + ½ d/dx e^(x*i)
      d/dx cos x = ½ e^(-x*i) d/dx(-x*i) + ½ e^(x*i) d/dx(x*i)
      d/dx cos x = ½ e^(-x*i)(-i) + ½ e^(x*i)(i)
      d/dx cos x = −[½ i e^(-x*i) − ½ i e^(x*i)]
      d/dx cos x = −sin x
      (11 votes)
  • markthom
    At one point he states that the steps work whether you are using radians or degrees, but it doesn't seem reasonable that the polynomial will give you the same answer whether you use x = pi/2 or x = 90. Is there some reason that makes this approximation work specifically with radians?
    (3 votes)
  • Pedro
    How are the Maclaurin and Taylor series related to the polar form of complex numbers, given that they share the same playlist?

    I'm honestly not getting what the relation between these two might be.
    (5 votes)
  • TacoBFF
    Why exactly does taking derivatives at a point give you a polynomial for the function? I don't get how the 3rd, 4th, 5th, etc. derivatives help match the function anywhere other than at zero. Is it because the slope of the slope at zero approximates the points around it?
    (2 votes)
    • Ethan Dlugie
      Here's the basic idea: Hm, I wonder if I can make a polynomial that has all of the same derivatives at a point as a particular function? Look at that, now we can.
      And here's a bonus: it even looks like the function! In fact, for certain functions, I can "tailor" (pun intended) an infinite polynomial that exactly equals the function at all values of x just by this process of creating a polynomial with the same derivatives as my function.
      (4 votes)
  • Lucas Mezalira
    I took that last polynomial equation (the one produced after the Maclaurin series was applied) and rewrote it as a power series summation (like Sal taught us in his last playlist), and I got:

    Summation from n=0 to infinity of - ( x^2 / 2n! )^n , which is 1 / [ 1 + ( x^2/2n! )] with radius of convergence x < sqrt( 2n! ).

    So my question is, is that correct? Can we represent cos(x) three different ways (Maclaurin polynomial, power series in sigma notation, and the power series sum after applying the formula 1/(1 - r))?
    (3 votes)
  • zzzach99
    If we define the Maclaurin series for cos x as ∑ from n = 0 to infinity of (-1)^n * x^(2n)/(2n)!, wouldn't we artificially take x = 0 out of the domain because 0^0 is not defined? I know this notation isn't used in this video, but I've seen it used before, including in Khan Academy exercises.
    (1 vote)
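A few of the points above lend themselves to a quick numerical check: the series form in Matthew Manes's answer, Ivan Barreto's point about how calculators use these series, and markthom's question about radians. Here is a minimal Python sketch (standard library only; the function name cos_series is just an illustrative choice, not anything from the video) that sums the first several terms of Σ ((-1)^k · x^(2k)) / (2k)! and compares them with math.cos:

    import math

    def cos_series(x, n_terms=10):
        # Partial sum of the series above: (-1)^k * x^(2k) / (2k)! for k = 0 .. n_terms - 1.
        return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(n_terms))

    # Like math.cos, the series takes its input in radians:
    print(cos_series(math.pi / 2), math.cos(math.pi / 2))            # both ~ 0
    print(cos_series(math.radians(60)), math.cos(math.radians(60)))  # both ~ 0.5

With only ten terms the partial sum agrees with math.cos to many decimal places for moderate |x|, which is essentially how a calculator can produce these values so quickly. Note that plugging in 90 without converting would ask the series for the cosine of 90 radians, not 90 degrees.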

Video transcript

In the last video we hopefully set up some of the intuition for why - or I should say what - the Maclaurin series is all about, and I said at the end of that video that a Maclaurin series is just a special case of a Taylor series. In the case of a Maclaurin series, we're approximating the function around x is equal to 0, while with a Taylor series, which we'll talk about in a future video, you can pick an arbitrary x value around which to approximate the function. But with that said, let's just focus on the Maclaurin series, because to some degree it's a little bit simpler, and that by itself can lead us to some pretty profound conclusions about mathematics - and that's actually where I'm trying to get to.

So let's take the Maclaurin series of some interesting functions, and I'm going to pick functions where it's pretty easy to take the derivatives, and where you can keep taking their derivatives over and over and over again. Let's take the Maclaurin series of cosine of x. If f(x) = cos(x), then before I even apply the formula we somewhat derived in the last video - or at least got the intuition for - let's take a bunch of derivatives of f(x), just so we have a good sense of it. The first derivative of cos(x) is -sin(x). Take the derivative of that: the derivative of sin(x) is cos(x), and we have the negative there, so it's -cos(x). Take the derivative of that - this is the third derivative of cos(x) - and now it's just going to be positive sin(x). And if we take the derivative of that, the fourth derivative, we get cos(x) again.

Now, from what we talked about in the last video, we want the function and its various derivatives evaluated at 0, so let's evaluate them at 0. f(0): cos(0) is 1 - whether you're talking about zero radians or zero degrees, it doesn't matter. sin(0) is 0, so f'(0) is 0. Then cos(0) is, once again, 1, but we have the negative out there, so the second derivative evaluated at 0 is -1. The third derivative evaluated at 0: well, sin(0) is just 0. And the fourth derivative evaluated at 0: cos(0) is 1, so the fourth derivative at 0 is equal to 1. So you see an interesting pattern here: one, zero, negative one, zero, one, zero, negative one, zero, and it keeps repeating.
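As a quick check of that repeating pattern, here is a small sketch (this assumes the SymPy library is available; it is not something used in the video) that takes successive derivatives of cos(x) symbolically and evaluates each one at zero:

    import sympy as sp

    x = sp.symbols('x')
    f = sp.cos(x)

    # The n-th derivative cycles cos -> -sin -> -cos -> sin -> cos, so the values
    # at x = 0 cycle 1, 0, -1, 0, 1, 0, -1, 0, ...
    for n in range(9):
        nth = sp.diff(f, x, n)          # n-th derivative of cos(x)
        print(n, nth, nth.subs(x, 0))   # e.g. 1  -sin(x)  0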
So if we apply this to find its Maclaurin representation, what do we get? Let me do my best attempt at this. Our polynomial approximation of cosine of x is going to be f(0), which is 1, plus f'(0) times x - but f'(0) is just 0, so that term won't be there; I won't even take the trouble of writing it down. Then plus the second derivative, which is negative 1, times x squared over 2 factorial, which in this case is just going to be 2. But I'll write it as 2 factorial, to make the pattern a little bit more obvious. Then we go to the next term: the third derivative evaluated at 0 is just 0, so that term won't be there either. Then you go to the fourth derivative; the fourth derivative evaluated at 0 is positive 1, so this coefficient is going to be 1, and you're going to have 1 times x to the fourth over 4 factorial - so plus x to the fourth over 4 factorial.

And I think you start seeing a pattern now. You have sign switches - and you would see this if we kept going, so you can verify it for yourself if you don't believe me: a positive sign, a negative sign, a positive sign, a negative sign, and so on and so forth. And this is 1 times x to the zeroth power, then you jump to x squared, then to x to the fourth. So if we kept that up: we've had a positive sign, now we have a negative sign, it would be x to the sixth over 6 factorial; then a positive sign, x to the eighth over 8 factorial; then a negative sign, x to the tenth over 10 factorial; and you can just keep going that way. If you kept going with this series, this would be the polynomial representation of cosine of x. And it's frankly just kind of cool that it can be represented this way - it's a pretty simple pattern for a trigonometric function. Once again, it kind of tells you that all of this math is connected, and we'll see, two or three videos from now, that it's connected in far more profound ways than you can possibly imagine.
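To watch the polynomial from this video converge, here is a short sketch (plain Python, standard library only; the sample point x = π is an arbitrary choice) that adds the terms 1, −x²/2!, +x⁴/4!, −x⁶/6!, ... one at a time and prints the running total next to its error against math.cos:

    import math

    x = math.pi                 # arbitrary sample point; any real x works
    target = math.cos(x)

    term = 1.0                  # the first term, x^0 / 0!
    partial = 0.0
    for n in range(8):
        partial += term
        print(f"{n + 1} term(s): {partial:+.10f}   error = {partial - target:+.3e}")
        # Next term: go from x^(2n)/(2n)! to x^(2n+2)/(2n+2)! and flip the sign.
        term *= -x * x / ((2 * n + 1) * (2 * n + 2))

Eight terms already match cos(π) = −1 to within a few millionths, and adding more terms makes the agreement as close as you like; that is the sense in which the infinite series is the function.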