## AP®︎/College Calculus AB > Unit 2

### Lesson 8: Derivatives of cos(x), sin(x), 𝑒ˣ, and ln(x)

- Derivatives of sin(x) and cos(x)
- Worked example: Derivatives of sin(x) and cos(x)
- Derivatives of sin(x) and cos(x)
- Proving the derivatives of sin(x) and cos(x)
- Derivative of 𝑒ˣ
- Derivative of ln(x)
- Derivatives of 𝑒ˣ and ln(x)
- Proof: The derivative of 𝑒ˣ is 𝑒ˣ
- Proof: the derivative of ln(x) is 1/x

© 2023 Khan Academy

# Proof: The derivative of 𝑒ˣ is 𝑒ˣ

AP.CALC: FUN‑3 (EU), FUN‑3.A (LO), FUN‑3.A.4 (EK)

𝑒ˣ is the only function that is the derivative of itself!

(Well, actually, f(x) = 0 is also the derivative of itself, but it's not a very interesting function...)

The AP Calculus course doesn't require knowing the proof of this fact, but we believe that as long as a proof is accessible, there's always something to learn from it. In general, it's always good to require some kind of proof or justification for the theorems you learn.

## Want to join the conversation?

- At 7:23, how did the limit get inside the logarithm function? I'm finding it hard to make sense of this step. It's like saying lim (x → 0) cos(x) = cos(lim (x → 0) x).

How is that possible?

Can this only be applied to logarithm functions, or is it generic for other functions like cos and sin?
- It's NOT a general rule, and I wish Sal had spent some time explaining why it works in this *particular case*.

– – –

First of all, we're dealing with a *composite function*.

𝑓(𝑥) = 1∕ln 𝑥

𝑔(𝑥) = (1 + 𝑥)^(1∕𝑥)

⇒

𝑓(𝑔(𝑥)) = 1∕ln((1 + 𝑥)^(1∕𝑥))

In general terms we are looking for

𝐹 = lim(𝑛 → 0) 𝑓(𝑔(𝑛))

This means that we let 𝑛 approach zero, which makes 𝑔(𝑛) approach some limit 𝐺, which in turn makes 𝑓(𝑔(𝑛)) approach 𝐹.

In other words:

𝐺 = lim(𝑛 → 0) 𝑔(𝑛)

𝐹 = lim(𝑔(𝑛) → 𝐺) 𝑓(𝑔(𝑛)) = [let 𝑥 = 𝑔(𝑛)] = lim(𝑥 → 𝐺) 𝑓(𝑥)

Now, if we use our definitions of 𝑓(𝑥) and 𝑔(𝑥), we get

𝐺 = lim(𝑛 → 0) (1 + 𝑛)^(1∕𝑛) = [by definition] = 𝑒

𝐹 = lim(𝑥 → 𝑒) 1∕ln 𝑥 = [by direct substitution] = 1∕ln 𝑒 = 1

Note that 𝐹 was given to us by direct substitution, which means that in this *particular case* we have

lim(𝑥 → 𝐺) 𝑓(𝑥) = 𝑓(𝐺) = 𝑓(lim(𝑛 → 0) 𝑔(𝑛))
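The two-step limit above can be spot-checked numerically. Here is a minimal Python sketch (my own check, not part of the original answer; the sample value n = 1e-6 is an arbitrary small number):

```python
import math

def g(n):
    # g(n) = (1 + n)^(1/n), which approaches e as n -> 0
    return (1 + n) ** (1 / n)

def f(x):
    # f(x) = 1 / ln(x)
    return 1 / math.log(x)

n = 1e-6
G = g(n)   # close to e
F = f(G)   # close to 1 / ln(e) = 1
print(G, F)
```

As expected, G lands near e ≈ 2.71828 and F lands near 1, matching the direct-substitution step.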

– – –

EDIT (10/28/21):

The reason this works is because lim 𝑥→0 𝑔(𝑥) = 𝑒 (i.e. the limit exists)

and 𝑓(𝑥) is continuous at 𝑥 = 𝑒

According to the theorem for limits of composite functions we then have

lim 𝑥→0 𝑓(𝑔(𝑥)) = 𝑓(lim 𝑥→0 𝑔(𝑥))

Sal explains that theorem here:

https://www.khanacademy.org/math/ap-calculus-ab/ab-limits-new/ab-1-5a/v/limits-of-composite-functions

- How can e^x be the only function that is the derivative of itself? Doesn't f(x) = 19e^x also satisfy this property?
- When we say that the exponential function is the only function that is its own derivative, we mean it in the sense of solving the differential equation f' = f. It's true that (19e^x)' = 19e^x, but this isn't really a new solution: I can pull the 19 out of the derivative and cancel it from both sides. You are correct that the general solution is Ae^x, where A is a real constant; however, the "A" part isn't the main focus. The main focus is the exponential, since that's what varies, and constants don't.
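A quick numerical illustration of this point (a Python sketch of my own; the constant A = 19 and the test point are arbitrary choices): a central-difference estimate of the derivative of A·e^x matches the function itself.

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

A = 19.0
f = lambda x: A * math.exp(x)   # f(x) = A * e^x

x0 = 1.3                        # arbitrary test point
approx = numeric_derivative(f, x0)
exact = f(x0)
print(approx, exact)            # the two values nearly agree
```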

- Where can I find the proofs of lim (n → ∞) (1+1/n)^n = e and lim (n → 0) (1+n)^(1/n) = e?
- https://www.khanacademy.org/math/algebra2/exponential-and-logarithmic-functions/e-and-the-natural-logarithm/v/e-as-limit

or

https://mathcs.clarku.edu/~djoyce/ma122/elimit.pdf

The proofs of the two formulas are the same:

lim_{n → ∞} (1 + 1/n)^n = lim_{1/n → 0} (1 + 1/n)^n = lim_{x → 0} (1 + x)^(1/x).

- How/why is (1+1/n)^n equal to (1+n)^(1/n)? Is this just a basic law of exponents?
- Think about it like this:

it is completely legal for us to define one variable as some amount of another variable. Therefore, we can say that n=1/u, for example.

Let's say n=1/u

and

e = lim (n → ∞) (1 + 1/n)^n

Now let's rewrite this in terms of u. The limit will now be that u gets very small and approaches 0, because for n = 1/u to approach infinity, u must approach 0 (from the positive side).

(lim u-> 0) (1+u)^(1/u) (I simplified 1/(1/u) to just u)

This, therefore, is equivalent to the other definition of e, because all we have done is describe the variable in a new way, without adding anything to or taking anything away from the original equation; we're just looking at it differently.
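The substitution can also be confirmed numerically. A short Python sketch (my own check; the value n = 10⁶ stands in for "large n"):

```python
import math

n = 1e6        # large n, approximating n -> infinity
u = 1 / n      # the substitution n = 1/u, so u -> 0

via_n = (1 + 1 / n) ** n      # (1 + 1/n)^n
via_u = (1 + u) ** (1 / u)    # (1 + u)^(1/u), the substituted form
print(via_n, via_u)           # both close to e
```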

- At 7:23, is this an application of the principle:

lim(x->a)[ f(g(x)) ] = f( lim(x->a)[g(x)] )

?
- Yes, with 𝑓(𝑥) = ln 𝑥 and 𝑔(𝑥) = (1 + 1∕𝑥)^𝑥

we get 𝑓(𝑔(𝑥)) = ln(1 + 1∕𝑥)^𝑥

Because the natural log function is continuous, we have

lim[𝑥 → ∞] 𝑓(𝑔(𝑥)) = 𝑓(lim[𝑥 → ∞] 𝑔(𝑥))

= ln(lim[𝑥 → ∞] (1 + 1∕𝑥)^𝑥)

- Technically, the function x^0 - 1 is its own derivative.
- Any function of the form a·e^x is its own derivative, and these are the *only* functions that are their own derivatives. The zero function is just the special case where a = 0.

- Hi - I am interested that Sal says that e = (1+n)^(1/n). When I graphed y = (1+x)^(1/x), the graph converges to 1. What mistake have I made?
- What you may have missed is the lim (n → 0) in that definition. You are correct that lim (x → ∞) (1+x)^(1/x) = 1, but lim (x → 0) (1+x)^(1/x) = e.
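A short Python sketch (my own check) makes the distinction concrete: the same expression heads toward 1 for large x and toward e for small x.

```python
import math

toward_one = (1 + 1e6) ** (1 / 1e6)    # x large: (1+x)^(1/x) -> 1
toward_e = (1 + 1e-6) ** (1 / 1e-6)    # x small: (1+x)^(1/x) -> e
print(toward_one, toward_e)
```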

- 1. At 5:10, can you rigorously prove that ∆x → 0 is equivalent to n → 0?

2. At 7:17, which limit property allows:

lim n->0 ln ((1+n)^1/n) (denominator)

to become:

ln (lim n->0 (1+n)^1/n)?

Thanks.
- 1. Ooo, that's a hard one! I tried a proof using a method called the "epsilon-delta method" (which is the most rigorous method of proving literally anything related to limits) and it does seem to work. Here it is:

**WARNING:** This proof is pretty long and exhaustive. I'd definitely recommend learning the epsilon-delta method first and then going over this. It's available later on in the course.

So, we have ∆x = ln(n+1). We are given the statement that as ∆x tends to zero, n tends to 0 as well. We can prove these two separately. We assume the statement is true. And if we can prove that both ∆x and ln(n+1) tend to the same number, we should be done.

Now, let's first prove that lim (∆x → 0) ∆x = 0. If I let ε > 0, I need a δ > 0 such that if |∆x - 0| < δ, then |∆x - 0| < ε. We don't have to do much here, as this implication holds when δ = ε. So, we've proven the first limit.

Now, the second one is pretty hard. We have lim (n → 0) ln(n+1) = 0. Again, given ε > 0, we need to find a δ > 0 such that if |n - 0| < δ, then |ln(n+1) - 0| < ε.

Now, we start with |ln(n+1) - 0| < ε and work backwards. We can write this as |ln(n+1) - ln(1)| < ε, which expands to -ε < ln(n+1) - ln(1) < ε. Using log properties, we get -ε < ln((n+1)/1) < ε. Exponentiating to remove the ln, we get e^(-ε) < n+1 < e^(ε). Subtracting 1 on both sides, we have (e^(-ε) - 1) < n < (e^(ε) - 1). Rewriting the left side, we have ((1 - e^(ε))/e^(ε)) < n < (e^(ε) - 1). Taking a negative out of the left side, we have -((e^(ε) - 1)/e^(ε)) < n < (e^(ε) - 1).

Now, to make this ugly thing look better, I'm gonna call ((e^(ε)-1)/e^(ε)) as δ_1 and (e^(ε)-1) as δ_2. So, we now have (-δ_1)<n<(δ_2). Much better to deal with!

Now, if you observe the expressions for δ_1 and δ_2, you can see that δ_1 < δ_2. Why? Because δ_1 is just δ_2 divided by e^(ε), which makes it smaller. Think of it this way: if I take 10 = 10 and divide the LHS by 3, I get 10/3 and 10, and 10/3 is clearly smaller. Same logic here. So, now that we know δ_1 < δ_2, I'll take my main δ to be the minimum of the two, which is δ_1 (it's always safe to take the smaller value of δ).

We now have -δ < n < δ and hence |n| < δ, which is EXACTLY what we needed: such a δ value exists, namely δ = δ_1 = (e^(ε) - 1)/e^(ε). Hence, this limit has also been proven. This shows that ∆x → 0 and n → 0 essentially imply each other, and I've proven each limit independently as well.
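The δ constructed in this argument can be spot-checked numerically. A small Python sketch (my own check; the value of ε and the sampling grid are arbitrary):

```python
import math

def delta_for(eps):
    # delta_1 = (e^eps - 1) / e^eps, the smaller of the two bounds above
    return (math.exp(eps) - 1) / math.exp(eps)

eps = 0.01
delta = delta_for(eps)

# For every sampled n with |n| < delta, |ln(n + 1)| should be < eps.
samples = [k * delta / 1000 for k in range(-999, 1000)]
ok = all(abs(math.log(1 + n)) < eps for n in samples)
print(delta, ok)
```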

2. This uses the theorem for limits of composite functions.

- When/where do we learn that change of variables method?
- At 3:35, Sal came up with n. Can the whole proof be shown without this n? Why did he come up with this idea and not something else?