
Cosine Taylor series at 0 (Maclaurin)

Approximating f(x) = cos x using a Maclaurin series (a special case of a Taylor series centered at x = 0)


Discussion and questions for this video
I could understand the steps and the end result, but I have a question as to how a function determined at 0 can be valid over its full range. My feeling is that since cosine is a periodic function, this holds true. Someone correct me if I am wrong.
Two points.
1. If you look at the pattern of the derivatives, you'll see that after 4 derivatives you are back to the original function, which means the pattern just continues to repeat no matter how far out you go.
2. It also has to do with the series representation. If you write it out, the series is Σ ((-1)^k * x^(2k)) / (2k)!, summed over all k ≥ 0. Notice that the variable appears simply as x^(2k). If it were (x - 3)^(2k) instead, the series would be centered at 3. The general form of a power series is Σ a_n (x - c)^n; when there is no "c" in the series representation, the function is centered at 0.

Hope this made some sense.
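The series in the answer above can be sketched in a few lines of Python (the function name `maclaurin_cos` and the 10-term default are just illustrative choices, not anything from the video):

```python
import math

def maclaurin_cos(x, terms=10):
    """Partial sum of the Maclaurin series for cos x:
    sum over k of (-1)^k * x^(2k) / (2k)!"""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

print(maclaurin_cos(1.0))  # close to math.cos(1.0) ≈ 0.54030
print(math.cos(1.0))
```

Notice there is no (x − c) factor anywhere in the sum, which is exactly the point about the series being centered at 0.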
So... a Taylor series is a way to represent a function? :S
With Taylor and Maclaurin series you can approximate a function with a polynomial. This is useful because you can turn a complicated function (defined by a limit, for example) into simple multiplication and exponentiation of numbers. That's how your calculator gets the sine and cosine of angles almost immediately: it doesn't store a table of sines and cosines for all possible values; it approximates them using Taylor/Maclaurin series.
This is supposed to approximate the function cos(x). Does it approximate the entire function all the way to infinity, or only near x=0?
There are methods for determining the maximum size of the error produced by an approximation like this, so you can figure out how many terms you need in order to get an approximation within the tolerance you desire. Hopefully Sal will cover those in an upcoming video. Also, cos x is periodic, so if you can get within your desired tolerance on, say, the interval 0 <= x <= 2 pi, then you can just shift over by any multiple of 2 pi to get your approximation if your x lies outside that interval.
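The shift-by-2π idea in that answer can be sketched like this (a rough illustration; `cos_reduced` is a made-up name, and 15 terms is an arbitrary count that is comfortably accurate over one period):

```python
import math

def maclaurin_cos(x, terms=15):
    # Partial Maclaurin sum for cos x; accuracy degrades as |x| grows.
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

def cos_reduced(x, terms=15):
    # Use periodicity: cos(x) = cos(x mod 2*pi), so a term count tuned
    # for one period works for any x.
    return maclaurin_cos(math.fmod(x, 2 * math.pi), terms)

print(abs(cos_reduced(100.0) - math.cos(100.0)))    # tiny
print(abs(maclaurin_cos(100.0) - math.cos(100.0)))  # huge without reduction
```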
How are the Maclaurin and Taylor series related to the polar form of complex numbers, given that they share the same playlist?

I'm honestly not getting what the relation between these two might be.
I took that last polynomial equation (produced after the Maclaurin series was applied) and transformed it into a power series summation (like Sal taught us in his last playlist), and I got:

summation from n=0 to infinity of −(x^2 / 2n!)^n, which is 1 / [1 + (x^2 / 2n!)] with radius of convergence x < sqrt(2n!).

So my question is: is that correct? Can we represent cos(x) three different ways (Maclaurin polynomial, power series in sigma notation, and the power series sum after applying the formula 1/(1 − r))?
Very cool concept, but I'm wondering if you can do it in reverse: take an infinite polynomial and turn it into a simple function?
Why exactly does taking derivatives at a point give you the polynomial for the function? I don't get how the 3rd, 4th, 5th, etc. derivatives help match the slope at any point other than zero. Is it because the slope of the slope at zero approximates the points around it?
Think about how the first derivative gives you the slope at a point, and then the second derivative gives you the concavity of the function, and the third derivative will give you the concavity of the first derivative, and so on. Also, remember from the first video that we assume the function is infinitely differentiable.

Another video in this series will probably help more: https://www.khanacademy.org/math/calculus/sequences_series_approx_calc/maclaurin_taylor/v/visualizing-taylor-series-approximations
Why would I want to approximate the value of sin(x) or e^x when I can find an exact value?
Why, when we approximate the cosine function with polynomials, does it work so much better than approximating the sine function? (I can send the approximations in the graph I made; email me if possible.)
It also depends on what order term you use for an approximation. At order 1 (linear approximation), you get a better approximation for sine than cosine (y = x works better for sine than y = 1 works for cosine). But if you include second order terms (quadratic approximation) you do better with cosine because sine doesn't have a second order term.
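That comparison is easy to check numerically; here is a small sketch at x = 0.5 (the point 0.5 is an arbitrary choice for illustration):

```python
import math

x = 0.5
# First-order (linear) approximations at 0: sin x ≈ x, cos x ≈ 1
err_sin_lin = abs(math.sin(x) - x)    # about 0.021
err_cos_lin = abs(math.cos(x) - 1)    # about 0.122, worse than sine
# The second-order term helps cosine: cos x ≈ 1 - x^2/2
err_cos_quad = abs(math.cos(x) - (1 - x ** 2 / 2))  # about 0.0026

print(err_sin_lin, err_cos_lin, err_cos_quad)
```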
Please, I could not solve these questions: expand the function f(x) = cos²(3x) at x₀ = 0, and show that the sum of cos(2kπ/(2n+1)) from k = 1 to n equals −1/2.
How can cos(x) in degrees and cos(x) in radians have the same series if one finishes its cycle at 2 pi and the other at 360? That would mean that if you plug pi into the series it gives the same result as 180!
MADNESS I SAY!
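For what it's worth, the series in the video is derived in radians (the derivative of sin x is cos x only when x is measured in radians), so a degree input has to be converted first. A quick sketch:

```python
import math

def maclaurin_cos(x, terms=20):
    # Partial Maclaurin sum for cos x; x is assumed to be in radians.
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

print(maclaurin_cos(math.pi))            # ≈ -1: cos of pi radians
print(maclaurin_cos(math.radians(180)))  # same thing: 180 degrees converted first
```

Plugging the raw number 180 into the series would give cos(180 radians), a completely different value.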