
## AP®︎/College Calculus AB


Lesson 8: Derivatives of cos(x), sin(x), 𝑒ˣ, and ln(x)

# Proof: The derivative of 𝑒ˣ is 𝑒ˣ

𝑒ˣ is the only function that is the derivative of itself!

d∕dx [𝑒ˣ] = 𝑒ˣ

(Well, actually, f(x) = 0 is also the derivative of itself, but it's not a very interesting function...)
The AP Calculus course doesn't require knowing the proof of this fact, but we believe that as long as a proof is accessible, there's always something to learn from it. In general, it's always good to require some kind of proof or justification for the theorems you learn.
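The fact itself is easy to test numerically. Here's a minimal sketch (in Python; the test points and step size h are arbitrary choices, not part of the proof) comparing a central-difference estimate of the derivative of 𝑒ˣ against 𝑒ˣ itself:

```python
import math

def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx e^x should match e^x at every test point.
for x in [-2.0, 0.0, 1.0, 3.0]:
    assert abs(derivative(math.exp, x) - math.exp(x)) < 1e-4 * math.exp(x)
```

Of course, a numerical check is only evidence, not a proof; that's what the video above is for.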

## Want to join the conversation?

• At , how did the limit get inside the logarithm function? It is hard for me to make sense of this step. It is like saying lim (x → 0) cos(x) = cos(lim (x → 0) x).
How is that possible?
Can this only be applied to logarithmic functions, or is it generic for other functions like cos, sin, etc.? •   It's NOT a general rule, and I wish Sal had spent some time explaining why it works in this particular case.

– – –

First of all, we're dealing with a composite function.

𝑓(𝑥) = 1∕ln 𝑥
𝑔(𝑥) = (1 + 𝑥)^(1∕𝑥)

𝑓(𝑔(𝑥)) = 1∕ln((1 + 𝑥)^(1∕𝑥))

In general terms we are looking for
𝐹 = lim(𝑛 → 0) 𝑓(𝑔(𝑛))

This means that we let 𝑛 approach zero, which makes 𝑔(𝑛) approach some limit 𝐺, which in turn makes 𝑓(𝑔(𝑛)) approach 𝐹.

In other words:
𝐺 = lim(𝑛 → 0) 𝑔(𝑛)
𝐹 = lim(𝑔(𝑛) → 𝐺) 𝑓(𝑔(𝑛)) = [let 𝑥 = 𝑔(𝑛)] = lim(𝑥 → 𝐺) 𝑓(𝑥)

Now, if we use our definitions of 𝑓(𝑥) and 𝑔(𝑥), we get
𝐺 = lim(𝑛 → 0) (1 + 𝑛)^(1∕𝑛) = [by definition] = 𝑒
𝐹 = lim(𝑥 → 𝑒) 1∕ln 𝑥 = [by direct substitution] = 1∕ln 𝑒 = 1

Note that 𝐹 was given to us by direct substitution, which means that in this particular case we have
lim(𝑥 → 𝐺) 𝑓(𝑥) = 𝑓(𝐺) = 𝑓(lim(𝑛 → 0) 𝑔(𝑛))
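A numerical sketch of the two limits above (in Python; the particular values of 𝑛 are my own choice), checking that 𝑔(𝑛) approaches 𝑒 and 𝑓(𝑔(𝑛)) approaches 1 as 𝑛 shrinks:

```python
import math

def g(n):
    """g(n) = (1 + n)^(1/n), which approaches e as n -> 0."""
    return (1 + n) ** (1 / n)

def f(x):
    """f(x) = 1 / ln(x), continuous at x = e, with f(e) = 1."""
    return 1 / math.log(x)

for n in [1e-3, 1e-5, 1e-7]:
    assert abs(g(n) - math.e) < 0.01   # G = e
    assert abs(f(g(n)) - 1.0) < 0.01   # F = 1
```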

– – –

EDIT (10/28/21):
The reason this works is that lim (𝑥 → 0) 𝑔(𝑥) = 𝑒 (i.e. the limit exists)
and 𝑓(𝑥) is continuous at 𝑥 = 𝑒.

According to the theorem for limits of composite functions we then have
lim 𝑥→0 𝑓(𝑔(𝑥)) = 𝑓(lim 𝑥→0 𝑔(𝑥))

Sal explains that theorem here:
• How can e^x be the only function that is the derivative of itself? Doesn't f(x) = 19e^x also satisfy this property? • When we say that the exponential function is the only function that is its own derivative, we mean it in the sense of solving the differential equation f' = f. It's true that (19eˣ)' = 19eˣ, but this isn't really new; I can still pull the 19 out of the derivative and cancel it from both sides. You are correct that the general solution is Aeˣ, where A is a real constant; however, the A isn't the main focus - the main focus is the exponential, since that's the part that varies, while the constants don't.
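As a quick sketch of this answer's point (in Python; the constants A and the test points are arbitrary choices), every multiple Aeˣ does satisfy f' = f, with the exponential doing the work:

```python
import math

def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Any constant multiple A*e^x solves f' = f, not just A = 1.
for A in [1.0, 19.0, -3.5]:
    f = lambda x, A=A: A * math.exp(x)
    for x in [-1.0, 0.0, 2.0]:
        assert abs(derivative(f, x) - f(x)) <= 1e-4 * max(1.0, abs(f(x)))
```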
• Where can I find the proof that lim (n → ∞) (1 + 1/n)ⁿ = e and lim (n → 0) (1 + n)^(1/n) = e? • How/why is (1 + 1/n)ⁿ equal to (1 + n)^(1/n)? Is this just a basic law of exponents? • Think about it like this:

it is completely legal for us to define one variable as some amount of another variable. Therefore, we can say that n=1/u, for example.

Let's say n=1/u

and

e = lim (n → ∞) (1 + 1/n)ⁿ

Now let's rewrite this in terms of u. Since n = 1/u, letting n approach infinity means u must approach 0, because that is what makes the fraction 1/u arbitrarily large.

e = lim (u → 0) (1 + u)^(1/u)   (I simplified 1/(1/u) to just u)

This, therefore, is equivalent to the other definition of e, because all we have done is described the variable in a new way without adding in or taking away anything from the original equation, just looking at it differently.
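The substitution can be sanity-checked numerically (a Python sketch; the specific value of n is an arbitrary choice):

```python
import math

# Pick a large n; then u = 1/n is small.
n = 1_000_000.0
u = 1 / n

large_n_form = (1 + 1 / n) ** n    # (1 + 1/n)^n with n -> infinity
small_u_form = (1 + u) ** (1 / u)  # (1 + u)^(1/u) with u -> 0

# The two forms agree with each other, and both approach e.
assert abs(large_n_form - small_u_form) < 1e-9
assert abs(large_n_form - math.e) < 1e-5
```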
• At , is it that this is an application of the principle:

lim(x->a)[ f(g(x)) ] = f( lim(x->a)[g(x)] )

? • Technically, the function x^0 - 1 is its own derivative. • Hi - I am interested that Sal says that e = lim (n → 0) (1 + n)^(1/n); when I graphed y = (1 + x)^(1/x), the graph converges to 1. What mistake have I made? • 1. At , can you rigorously prove that Δx → 0 is equivalent to n → 0?

2. At , which limit property allows the denominator
lim (n → 0) ln((1 + n)^(1/n))
to become
ln(lim (n → 0) (1 + n)^(1/n))?

Thanks. • 1. Ooo that's a hard one! I tried a proof using a method called the "epsilon-delta method" (which is the most rigorous method of proving literally anything related to limits) and it does seem to work. Here it is:

WARNING This proof is pretty long and exhaustive. I'd definitely recommend learning the epsilon-delta method first and then going over this. It's available later on in the course.

So, we have ∆x = ln(n+1). The claim is that ∆x tends to 0 exactly when n tends to 0. We can prove the two directions separately: if we can show that ∆x and ln(n+1) each tend to the same number (namely 0), we should be done.

Now, let's first prove that lim (∆x → 0) ∆x = 0. For any ε > 0, I need a δ > 0 such that if |∆x − 0| < δ, then |∆x − 0| < ε. We don't have to do much here, as this holds when δ = ε. So, we've proven the first limit.

Now, the second one is pretty hard. We have lim (n → 0) ln(n+1) = 0. Again, for any ε > 0, we need to find a δ > 0 such that if |n − 0| < δ, then |ln(n+1) − 0| < ε.

Now, we work backwards from the target inequality |ln(n+1) − 0| < ε. We can write this as |ln(n+1) − ln(1)| < ε, which expands to −ε < ln(n+1) − ln(1) < ε. Using log properties, we get −ε < ln((n+1)/1) < ε. Exponentiating with base e removes the ln: e^(−ε) < (n+1) < e^(ε). Subtracting 1 on both sides, we have (e^(−ε) − 1) < n < (e^(ε) − 1). Simplifying the left side, we get ((1 − e^ε)/e^(ε)) < n < (e^(ε) − 1), and taking a negative out of the left side, −((e^(ε) − 1)/e^(ε)) < n < (e^(ε) − 1).

Now, to make this ugly thing look better, I'm gonna call ((e^(ε)-1)/e^(ε)) as δ_1 and (e^(ε)-1) as δ_2. So, we now have (-δ_1)<n<(δ_2). Much better to deal with!

Now, if you observe the expressions for δ_1 and δ_2, you can see that δ_1 < δ_2. Why? Because δ_1 is just δ_2 divided by e^(ε), which makes it smaller. Think of it this way: if I start with 10 and 10 and divide the first by 3, I get 10/3 and 10, and 10/3 is clearly smaller. Same logic here. So, now that we know δ_1 < δ_2, I'll take my main δ to be the minimum of the two, which is δ_1 (it's always safe to take the smaller value of δ). We now have −δ < n < δ and hence |n| < δ, which is EXACTLY what we needed to prove: such a δ exists, namely δ = δ_1 = ((e^(ε) − 1)/e^(ε)). Hence, this limit has also been proven. So ∆x and n both tend to 0 together; they essentially imply each other, i.e. if one holds, the other does too, but I just proved that each limit holds independently as well!
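For what it's worth, the δ_1 found above can also be checked numerically (a Python sketch; the value of ε and the grid of test points are arbitrary choices):

```python
import math

eps = 0.1
# delta_1 = (e^eps - 1) / e^eps, the smaller of the two candidate deltas.
delta_1 = (math.exp(eps) - 1) / math.exp(eps)

# Whenever 0 < |n| < delta_1, we should have |ln(n + 1)| < eps.
for k in range(1, 100):
    n = delta_1 * (k / 100)          # points inside (0, delta_1)
    assert abs(math.log(n + 1)) < eps
    assert abs(math.log(-n + 1)) < eps
```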

2. This uses the theorem about limits of composite functions.