
# Maclaurin series of sin(x)

AP.CALC: LIM‑8 (EU), LIM‑8.E (LO), LIM‑8.E.1 (EK), LIM‑8.F (LO), LIM‑8.F.2 (EK)
Approximating sin(x) with a Maclaurin series (which is like a Taylor polynomial centered at x=0 with infinitely many terms). It turns out that this series is exactly the same as the function itself! Created by Sal Khan.

## Want to join the conversation?

• How come he plugged in zero into all of the derivatives but not into the x's with exponents?
• The polynomial p(x) is a representation of a function f(x). So if you wanted to find the value of cos(0.1), it would be almost impossible to compute f(0.1) without a calculator.

So instead we manipulate f(x) into a polynomial form p(x); that is to say, the graphs of f(x) and p(x) (where p(x) is the nth-degree approximation) look nearly the same near the center.

And if the graphs look the same, they will give you nearly the same corresponding y values for whatever value of x you need.

So you can plug that 0.1 into the polynomial, i.e. p(0.1), and find the value with relatively simple arithmetic.

E.g.: for f(x) = cos(x), what would the 4th-degree approximation of cos(0.1) be?

p(x) = 1 - x^2/2! + x^4/4!

p(0.1) = 1 - (0.1)^2/2 + (0.1)^4/24 = 0.99500...

The actual value is cos(0.1) = 0.99500417...

which is very close, and the accuracy increases with the degree of the polynomial.

I hope that helped you, Greg.
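That arithmetic can be sketched in Python (a minimal sketch; the helper name `maclaurin_cos` is mine, not from the video):

```python
import math

def maclaurin_cos(x, degree):
    """Sum the Maclaurin series of cos up to the given (even) degree."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(degree // 2 + 1))

# 4th-degree approximation of cos(0.1): 1 - 0.1**2/2 + 0.1**4/24
p = maclaurin_cos(0.1, 4)
```

With `degree=4` this evaluates exactly the polynomial above, and the result already agrees with cos(0.1) to about eight decimal places.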
• How is Maclaurin and Taylor Series related to Complex Number's Polar Form, given that they share the same playlist?

I'm honestly not getting what the relation between these two might be.
• Ooooold question, but my guess is that:
1. complex numbers are deeply connected to trigonometry (as you can see when studying the polar form), and
2. these series include the approximations for cos(x) and sin(x).

I'm doing a class on Complex Calculus and series are a part of the program.
• Is the Maclaurin series related in some way to the parity of mathematical functions?
• That's a valuable observation. If a function is even, its Maclaurin series contains only terms with even exponents; likewise, an odd function's series contains only terms with odd exponents.
• What are the applications of studying this in real life?
(1 vote)
• I've created a function in Python that sums fifty terms of this Maclaurin series and returns the result rounded to 14 digits; it's my own homemade sine calculator. I also did this for the cosine function.

If you don't know Python programming, see the computer science playlist.
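For readers curious what such a calculator might look like, here is a minimal sketch following the description above (the name `maclaurin_sin` and the details are my guess, not the commenter's actual code):

```python
import math

def maclaurin_sin(x, terms=50):
    """Sum `terms` terms of the Maclaurin series of sin(x),
    rounded to 14 digits as described above."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return round(total, 14)
```

Note that for large |x| the early terms grow huge before the factorials win, so floating-point round-off degrades the result; it behaves well for modest arguments.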

• What is the application of series? Can you use it to approximate the equation of an unknown function? Sorry if my question is stupid, I'm a bit lost; I'll go back and review the previous videos. :/
• This process can only be done on a known function, and it is very difficult unless the derivatives can be calculated easily or repeat in a pattern. It is very useful for functions like sine and cosine, where it is impossible to calculate something like sin(0.237) by hand without using the Taylor or Maclaurin series (or actually drawing a circle and taking measurements). This also has many applications in sound processing.
• Maclaurin Taylor Series at 0 for sinx - odd powers over odd factorials
Maclaurin Taylor Series at 0 for cosx - even powers over even factorials

Is this is any way related to the fact that sinx is an odd function while cosx is an even function?

Or are the two ideas completely irrelevant?
• Yeah. Odd power functions have odd symmetry and even power functions have even symmetry, so a polynomial composed of only odd powers still has odd symmetry, and likewise for even. Since the cosine function has even symmetry, its polynomial representation cannot contain any odd powers, and likewise for sine.
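That symmetry can be checked directly on the truncated series (a sketch; `sin_poly` is an illustrative name):

```python
import math

def sin_poly(x, degree=9):
    """Maclaurin polynomial of sin up to the given odd degree:
    only odd powers appear, so the polynomial is an odd function."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(degree // 2 + 1))

# Odd symmetry: p(-x) == -p(x), since every term has an odd power of x.
```

Evaluating at any x and -x gives values that are exact negatives of each other, which is precisely the odd symmetry sin(x) itself has.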
• So I'm wondering what presuppositions must be made in order to come up with these trigonometric equations in terms of only x. All I can see is that we have to know what the derivatives of these trig functions are. So could somebody tell me what presuppositions must be made and perhaps how to prove that they are true? I just never understood how we can know what a sine or cosine of an angle is without like measuring lengths of a triangle or something (which probably wouldn't be too accurate).
• Really, we're not actually "knowing" the sine or cosine; we're getting an arbitrarily close polynomial approximation that happens to converge to the value of sine or cosine. It's like saying 1.999999999999999... = 2, just a little more complicated.
(1 vote)
• Can you see the property of sin(x)^2 + cos(x)^2 = 1 using Taylor series?
• Great question!
In theory, you could: find the Taylor polynomial of sin(x)^2 and the Taylor polynomial of cos(x)^2, and when you add them, all the terms except the constant 1 cancel. So this verifies that sin(x)^2 + cos(x)^2 = 1.
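This cancellation can be verified with exact rational arithmetic on the truncated series (a sketch under my own naming; multiplying the coefficient lists is ordinary polynomial multiplication):

```python
from fractions import Fraction
from math import factorial

N = 12  # truncation degree

# Exact Maclaurin coefficients of sin and cos up to degree N
sin_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 == 1 else Fraction(0)
         for k in range(N + 1)]
cos_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 == 0 else Fraction(0)
         for k in range(N + 1)]

def poly_mul(a, b, n):
    """Multiply two coefficient lists, keeping only terms of degree <= n."""
    out = [Fraction(0)] * (n + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= n:
                out[i + j] += ai * bj
    return out

# sin(x)^2 + cos(x)^2, truncated: every coefficient cancels except the constant 1
total = [s + c for s, c in zip(poly_mul(sin_c, sin_c, N),
                               poly_mul(cos_c, cos_c, N))]
```

Because the arithmetic is exact, the cancellation up to degree N is perfect, not just approximate.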
• I understand that you can essentially rewrite the functions by using this method but I don't understand why it has to be zero. From what I know a MacLaurin series is when x=0 but a Taylor series can be any other value or am I incorrect? What does that value represent? Also is there a proof somewhere of this method?
• You're right; the center doesn't have to be 0, it's just often very convenient to use 0 because it reduces the number of terms we have to handle. And yes, a Maclaurin series is just a particular kind of Taylor series that is centered at 0 (it's the same theorem).

That number, 0 or whatever you choose, represents the "center" of the series; it's the point around which we're building our successive approximations of the function. Taylor's formula is about creating better and better polynomial approximations of a function based on its derivative behavior at a given point; so we choose the point in which we're interested and construct a polynomial approximation around that point. When we make an approximation, we also have to consider what values of x allow that approximation to actually equal the desired value whenever we sum the infinite series. Sometimes the approximation will converge for all values of x, and sometimes it will only converge in a finite interval around the center that we choose; it depends on the function. But for the approximations that don't converge for all x, it's useful to be able to move our "window" of convergence so that it encompasses the value that we want to approximate.

Say we're approximating ln(e + 0.1). For one thing, we can't use a Maclaurin series because the function isn't even defined at 0. We might choose a Taylor series centered at x = e rather than at x = 1 because at x = 1, the approximation will only converge on the interval (0, 2), which doesn't include our value (about 2.8).

As far as proofs go, I'd check the Wikipedia page for Taylor's Theorem or http://math.stackexchange.com/questions/481661/simplest-proof-of-taylors-theorem .

I hope that helps.
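The effect of the center on convergence can be illustrated with a small sketch (hedged; `ln_taylor` is an illustrative helper, using the standard series ln(a) + Σ (-1)^(n+1) (x-a)^n / (n a^n)):

```python
import math

def ln_taylor(x, a, terms):
    """Partial sum of the Taylor series of ln centered at a:
    ln(a) + sum over n of (-1)**(n+1) * (x - a)**n / (n * a**n)."""
    s = math.log(a)
    for n in range(1, terms + 1):
        s += (-1) ** (n + 1) * (x - a) ** n / (n * a ** n)
    return s

x = math.e + 0.1
# Centered at e: |x - e| = 0.1 < e, so the partial sums converge quickly.
approx = ln_taylor(x, math.e, 20)
# Centered at 1: |x - 1| is about 1.82 > 1, so the partial sums blow up instead.
```

Twenty terms centered at e already match ln(e + 0.1) to machine precision, while the series centered at 1 diverges for this x, just as the answer above describes.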