Sequences, series, and function approximation
Maclaurin series and Euler's identity
Cosine Taylor series at 0 (Maclaurin)
Approximating f(x)=cos x using a Maclaurin series (a special case of a Taylor series at x=0)
Discussion and questions for this video
 In the last video, we hopefully set up some of the intuition for why, or I should say what, the Maclaurin
 series is all about, and I said at the end of that video that a Maclaurin series is just a special case
 of a Taylor series. In the case of a Maclaurin series, we're approximating
 this function around x is equal to 0, while with a Taylor series, which we'll talk about in a future video,
 you can pick an arbitrary x value, or f(x) value, we should say, around which to approximate the function.
 But with that said, let's just focus on Maclaurin, because to some degree it's a little bit simpler,
 and that by itself can lead us to some pretty profound conclusions about mathematics,
 and that's actually where I'm trying to get to.
 So let's take the Maclaurin series of some interesting functions,
 and I'm gonna do functions where it's pretty easy to take the derivatives, and you can
 keep taking their derivatives over and over and over and over and over again.
 So let's take the Maclaurin series of cosine of x, so if f(x)=cos(x),
 then, before I even apply this formula that we somewhat derived in the last video,
 or at least got the intuition for in the last video,
 let's take a bunch of derivatives of f(x), just so we have a good sense of it.
 So, if we take the first derivative, the derivative of cos(x) is -sin(x).
 If we take the derivative of that, well, the derivative of sin(x) is cos(x),
 and we have the negative there, so it's -cos(x).
 So if we take the derivative of that, so this is the third derivative of cos(x),
 now it's just going to be positive sine of x, and if we take the derivative of that,
 this is the fourth derivative, I should use this notation,
 but you get the idea: we get cos(x), we get cosine of x, again.
 And if you look at what we talked about in the last video, we want the function,
 and we want its various derivatives, evaluated at 0,
 so let's evaluate it at 0. So f(0), cos(0) is 1, cosine of zero is one.
 Whether you're talking about zero radians or zero degrees, doesn't matter,
 sine of zero is zero, so f prime of zero, f'(0), is zero. And then cos(0)
 is, once again, one, but we have the negative out there, so it becomes negative one.
 So f'', the second derivative evaluated at zero, is negative one.
 Let's take the third derivative, the third derivative evaluated at zero
 well, sine of zero is just zero, and then the fourth derivative evaluated at zero,
 cosine of zero is one. So the fourth derivative at zero is now equal to one.
 So you see an interesting pattern here: one, zero, negative one, zero, one,
 then you go to zero, then you go to negative one, zero.
 So if we were to apply this to find its Maclaurin representation, what would we get?
 Let me do my best attempt at this. So we would get, our polynomial would be,
 so our polynomial approximation of cosine of x is going to be f(0),
 and f(0) is one, so then we have one, plus f'(0) times x.
 But f'(0) is just zero, so we're not going to have this term over there; it's going to be
 zero times x, so I won't even take the trouble of writing it down.
 Then plus f prime prime, the second derivative, which is negative one,
 so I'll write a negative, this is a negative one right here,
 times x squared, times x squared,
 over 2 factorial, over two factorial, which in this case is just going to be two.
 But I'll just write it down here as two factorial, to make the pattern a little bit more obvious,
 and then we go to the next term, the third derivative evaluated at zero
 but the third derivative evaluated at zero is just zero, so this term won't be there as well,
 then you go to the fourth derivative, the fourth derivative evaluated at zero is positive one,
 so this coefficient right here is going to be a one, and so you're going to have
 one times x to the fourth over four factorial, so plus x to the fourth over four factorial,
 and I think you start seeing a pattern now.
 You have sign switches, and you would see this if we kept going,
 so you can verify it for yourself if you don't believe me:
 you have a positive sign, a negative sign, a positive sign, and then a
 negative sign, and so on and so forth. And this is one times x to the zeroth power,
 then you jump to x squared, then to x to the fourth, and
 so if we kept that up, next we'd have a negative sign:
 it would be minus x to the sixth over six factorial, then you'd have a positive sign,
 plus x to the eighth over eight factorial, and then you'd have a negative sign,
 minus x to the tenth over ten factorial, and you can just keep going that way.
 And if you kept going with this series, this would be the polynomial representation of
 cosine of x. And it's frankly just kind of cool that it can be represented this way.
 It's a pretty simple pattern here for a trigonometric function.
 Once again, it kind of tells you that all of this math is connected. And we'll see,
 two or three videos from now, it's connected in far more profound ways than you can possibly imagine.
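The series built up in the video, 1 - x^2/2! + x^4/4! - x^6/6! + ..., is easy to test numerically. Here is a minimal Python sketch (my own illustration, not from the video) that sums the first few terms and compares them to the built-in cosine:

```python
import math

def cos_maclaurin(x, terms=10):
    # Partial sum of 1 - x^2/2! + x^4/4! - ... ,
    # i.e. the sum of (-1)^k * x^(2k) / (2k)! for k = 0 .. terms-1
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

# More terms give a better approximation of cos(1):
for n in (1, 2, 3, 10):
    print(n, cos_maclaurin(1.0, n))
print("math.cos(1.0):", math.cos(1.0))
```

Near x = 0 even a handful of terms lands very close to the true value, which is exactly the behavior the pattern in the video predicts.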
I could understand the steps and the end result, but I have a question as to how a function determined at 0 is valid over the full range. What I feel is, since cosine is a periodic function, this holds true. Someone correct me if I am wrong.
Two points.
1. If you look at the pattern of your derivatives, you'll see that after 4 derivatives, it goes back to its original derivative, which means that it will just continue to repeat this pattern no matter how far you go out.
2. It then has to do with the series representation. If you look at the series representation, you'll see that it's Σ ((-1)^k * x^(2k))/(2k)!. This is for all k's. You'll notice that the x is just x^(2k). If it were (x-3)^(2k), then it would be centered at 3. The formula for a power series is Σ a_n(x-c)^n. If there is no "c" in the series representation, then the function is centered at 0.
Hope this made some sense.
You have to remember that we are not just taking the value of the FUNCTION at 0, but also of all its derivatives. That basically means you are given the slope of the function at 0; plotting the values of the slopes at a bunch of different points would give you a derived function, whose slope you could then take (i.e. the second derivative of the original function), and taking the slopes of lots of points in THAT derived function gives you another derived function (i.e. the THIRD derivative of the original function), and so on. Anyway, you are given the function AND all these derivatives, so although a function value at a certain point may not, alone, be a valid representation of the entire function, the value AND ALL ITS DERIVATIVES will be a valid representation of the entire function.
I'm sorry if I confused you.
So... the Taylor Series is a way to represent a function? :S
With Taylor and Maclaurin series you can approximate a function with a polynomial. This is useful because you can turn a complicated function (defined by a limit, for example) into simple multiplication and exponentiation of numbers. That's how your calculator gets the sine and cosine of angles almost immediately: it doesn't have a table of sines and cosines for all possible values, it approximates them using Taylor/Maclaurin series.
It is a way to approximate a function; the more terms you have, the better the approximation.
This is supposed to approximate the function cos(x). Does it approximate the entire function all the way to infinity, or only near x=0?
There are methods for determining the maximum size of the error produced by an approximation like this, so you can figure out how many terms you need to have in order to get an approximation within the tolerance you desire. Hopefully Sal will cover those in an upcoming video. Also, cos x is periodic, so if you can get within your desired tolerance on, say, the interval 0 <= x <= 2 pi, then you can just shift over any multiple of 2 pi to get your approximation if your x lies outside that interval.
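The range-reduction idea in this answer can be sketched in Python (my own illustration; the helper names are made up, and the series is the one from the video):

```python
import math

def maclaurin_cos(x, terms=10):
    # Partial sum of cos x = sum of (-1)^k * x^(2k) / (2k)!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

def cos_reduced(x, terms=10):
    # cos is periodic with period 2*pi, so first shift x by a
    # multiple of 2*pi into [-pi, pi]; the same partial sum then
    # stays accurate even for large x.
    r = math.remainder(x, 2 * math.pi)  # IEEE remainder, lies in [-pi, pi]
    return maclaurin_cos(r, terms)
```

Without the reduction step, the same 10-term partial sum evaluated directly at a large x like 100 would be wildly wrong; after shifting into [-pi, pi] it is accurate to several decimal places.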
How are the Maclaurin and Taylor series related to a complex number's polar form, given that they share the same playlist?
I'm honestly not getting what the relation between these two might be.
Is it actually proven that the Maclaurin/Taylor representations of these functions are EQUIVALENT to the corresponding functions if the number of terms in the representation goes to infinity?
Lots of it! And it can be done in various ways: via L'Hopital, via induction, via application of the Mean Value Theorem:
Mean Value: http://www.math.harvard.edu/~pflueger/math1b/Lecture14.pdf
Induction: http://www.math.csusb.edu/faculty/pmclough/SPTT.pdf
L'Hopital: http://en.wikipedia.org/wiki/Taylor%27s_theorem#Proof_for_Taylor.27s_theorem_in_one_real_variable
Very cool concept but I'm wondering if you can do it in reverse, take an infinite polynomial and turn it into a simple function?
I took that last polynomial equation (produced after the Maclaurin series was applied) and transformed it into a power series summation (like Sal taught us on his last playlist), and I got:
Summation from n=0 to infinity of (-x^2 / (2n)!)^n, which is 1 / [1 + (x^2/(2n)!)] with radius of convergence |x| < sqrt((2n)!).
So my question is, is that correct? Can we represent cos(x) three different ways (Maclaurin polynomial, power series (sigma notation), power series summation after applying the formula 1 / (1 - r))?
Why exactly does taking derivatives at a point give you the polynomial for the function? I don't get how the 3rd, 4th, 5th, etc. derivatives will attempt to match the slope at any point other than at zero; is it because the slope of the slope at zero approximates the points around it?
Here's the basic idea: Hm, I wonder if I can make a polynomial that has all of the same derivatives at a point as a particular function? Look at that, now we can.
And here's a bonus: it even looks like the function! In fact, for certain functions, I can "tailor" (pun intended) an infinite polynomial that exactly equals the function at all values of x just by this process of creating a polynomial with the same derivatives as my function.
Think about how the first derivative gives you the slope at a point, and then the second derivative gives you the concavity of the function, and the third derivative will give you the concavity of the first derivative, and so on. Also, remember from the first video that we assume the function is infinitely differentiable.
Another video in this series will probably help more: https://www.khanacademy.org/math/calculus/sequences_series_approx_calc/maclaurin_taylor/v/visualizingtaylorseriesapproximations
If we define the Maclaurin series for cos x as ∑ from n = 0 to infinity of (-1)^n * x^(2n)/(2n)!, wouldn't we artificially take x = 0 out of the domain because 0^0 is not defined? I know this notation isn't used in this video but I've seen it used before, including in Khan Academy exercises.
Most mathematicians take 0⁰ = 1; in the context of power series this is adopted as a convention (a definition rather than something to be proven). And it is in fact used in the Maclaurin series for cos x, such that the first term is 1 no matter what x is equal to.
At around 2:15 he states that the steps work whether you are using radians or degrees, but it doesn't seem reasonable that the polynomial will give you the same answer whether you use x = pi/2 or x = 90. Is there some reason that makes this approximation work specifically with radians?
That is only because you are dealing with an angle of 0, which is the same in radians and degrees. For any other angle, you need to use radians. At this level of study, degrees should not be used at all.
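A quick numerical check of this point (my own sketch, not from the discussion): the partial sums of the series match cosine only when x is in radians, because the series is just a polynomial in the number x.

```python
import math

def cos_series(x, terms=12):
    # Maclaurin partial sum: sum of (-1)^k * x^(2k) / (2k)!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

print(cos_series(math.pi / 2))  # close to 0, matching cos of a right angle
print(cos_series(90.0))         # enormous: "90 degrees" is just the number 90 here
```

The derivative formulas d/dx sin x = cos x and d/dx cos x = -sin x, which the whole construction rests on, only hold when x is measured in radians.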
If calculators use a Maclaurin series to approximate trig functions, how come when I graph cos x it takes a couple seconds, but when I graph the first 10 terms of its Maclaurin series it takes around half a minute? Does it have to do with the fact that trig functions are periodic, and hence the series only needs to be accurate between -pi and pi and then can be horizontally translated to find the cosine of angles outside that domain?
Calculators use a method called CORDIC to compute the trig functions. While this method is quite good for the way that computers process numbers, it isn't really very good for the way that humans do computations by hand. So, that is why it is not routinely taught in math classes.
So, no, calculators do not use Maclaurin series for the trig functions.
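For the curious, here is a rough floating-point sketch of rotation-mode CORDIC in Python (my own illustration of the idea; real hardware uses fixed-point shift-and-add versions with precomputed constant tables):

```python
import math

def cordic_cos_sin(theta, iterations=32):
    # Rotation-mode CORDIC: rotate the vector (1, 0) toward angle theta
    # using a fixed schedule of angles atan(2^-i), choosing the rotation
    # direction at each step by the sign of the remaining angle z.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Each pseudo-rotation scales the vector by sqrt(1 + 2^-2i);
    # starting from the product of cos(angles) cancels that total gain.
    k = 1.0
    for a in angles:
        k *= math.cos(a)
    x, y, z = k, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x, y  # approximately (cos theta, sin theta)
```

The update step uses only additions and multiplications by powers of two (shifts, in hardware), which is why the method suits calculator chips; it converges for angles up to about 1.74 radians, so larger inputs are range-reduced first.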
I feel like there is a gap in the videos here between this video and the previous Maclaurin and Taylor intuition video. I am not sure where the p(x) Maclaurin form or its purpose came from. It is just kind of presented as if you are already familiar with it. Any insight?
Why, when we approximate the cosine function with polynomials, does it work so much better than approximating the sine function? (I can send my approximations in the graph I made; email me if possible.)
It also depends on what order term you use for an approximation. At order 1 (linear approximation), you get a better approximation for sine than cosine (y = x works better for sine than y = 1 works for cosine). But if you include second order terms (quadratic approximation) you do better with cosine because sine doesn't have a second order term.
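That comparison is easy to check numerically; a quick sketch (my own, using x = 0.5 as an arbitrary test point near 0):

```python
import math

x = 0.5
# Order-1 (linear) Maclaurin approximations: sin x ~ x, cos x ~ 1
sin_linear_err = abs(math.sin(x) - x)
cos_linear_err = abs(math.cos(x) - 1)
print(sin_linear_err, cos_linear_err)  # sine's linear error is smaller

# Order-2 (quadratic): cos x ~ 1 - x^2/2 gains a term, while sine's
# series has no x^2 term, so its approximation is unchanged.
cos_quadratic_err = abs(math.cos(x) - (1 - x ** 2 / 2))
print(cos_quadratic_err)  # now cosine's error is the smaller one
```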
How fast any particular Taylor series converges depends on the value of x. Thus, the number of terms you need to get the series to converge to an acceptable number of decimal places depends on what value x is.
But, the Taylor series for both cosine and sine converge very quickly, so I am not sure what you mean.
Please, I could not solve this question: expand the function f(x) = cos^2(3x) at x_0 = 0.
Why would I want to approximate the value of sin(x) or e^x when I can find an exact values?
These functions are simple enough. But more complex functions are better approximated as polynomials.
"when I can find an exact value"
You can? How?
sum of cos(2k*pi/(2n+1)) = -1/2 from [k=1, n]
How can cos(x) in degrees and cos(x) in radians have the same series if one finishes its cycle at 2 pi and the other at 360? That would mean that if you plug pi into the series it will give the same result as 180!!
MADNESS I SAY!