### Course: AP®︎/College Calculus AB > Unit 2

Lesson 8: Derivatives of cos(x), sin(x), 𝑒ˣ, and ln(x)


# Proof: The derivative of 𝑒ˣ is 𝑒ˣ

(Well, actually, $f(x)=0$ is also the derivative of itself, but it's not a very interesting function...)

The AP Calculus course doesn't require knowing the proof of this fact, but we believe that as long as a proof is accessible, there's always something to learn from it. In general, it's always good to require some kind of proof or justification for the theorems you learn.
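Not part of the article, but the claim is easy to sanity-check numerically. This Python sketch compares a central difference quotient of 𝑒ˣ against 𝑒ˣ itself at a few points; the step size h and the sample points are arbitrary choices, not anything from the video.

```python
import math

def central_diff(f, x, h=1e-6):
    """Approximate f'(x) with a symmetric difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The derivative of e^x should match e^x itself at every point tested.
for x in [-2.0, 0.0, 1.0, 3.0]:
    approx = central_diff(math.exp, x)
    exact = math.exp(x)
    assert abs(approx - exact) < 1e-4 * max(1.0, exact)
```

Of course this only checks a handful of points; the proof below is what establishes the fact for all x.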

## Want to join the conversation?

- At 7:23, how did the limit get inside the logarithm function? I'm finding it hard to make sense of this step. It's like saying lim (x → 0) cos(x) = cos(lim (x → 0) x).

How is that possible?

Can this only be applied to logarithm functions, or is it generic for other functions like cos, sin, etc.?(47 votes)
- It's NOT a general rule, and I wish Sal had spent some time explaining why it works in this *particular case*.

– – –

First of all, we're dealing with a *composite function*.

𝑓(𝑥) = 1∕ln 𝑥

𝑔(𝑥) = (1 + 𝑥)^(1∕𝑥)

⇒

𝑓(𝑔(𝑥)) = 1∕ln((1 + 𝑥)^(1∕𝑥))

In general terms we are looking for

𝐹 = lim(𝑛 → 0) 𝑓(𝑔(𝑛))

This means that we let 𝑛 approach zero, which makes 𝑔(𝑛) approach some limit 𝐺, which in turn makes 𝑓(𝑔(𝑛)) approach 𝐹.

In other words:

𝐺 = lim(𝑛 → 0) 𝑔(𝑛)

𝐹 = lim(𝑔(𝑛) → 𝐺) 𝑓(𝑔(𝑛)) = [let 𝑥 = 𝑔(𝑛)] = lim(𝑥 → 𝐺) 𝑓(𝑥)

Now, if we use our definitions of 𝑓(𝑥) and 𝑔(𝑥), we get

𝐺 = lim(𝑛 → 0) (1 + 𝑛)^(1∕𝑛) = [by definition] = 𝑒

𝐹 = lim(𝑥 → 𝑒) 1∕ln 𝑥 = [by direct substitution] = 1∕ln 𝑒 = 1

Note that 𝐹 was given to us by direct substitution, which means that in this *particular case* we have

lim(𝑥 → 𝐺) 𝑓(𝑥) = 𝑓(𝐺) = 𝑓(lim(𝑛 → 0) 𝑔(𝑛))

– – –

EDIT (10/28/21):

The reason this works is that lim 𝑥→0 𝑔(𝑥) = 𝑒 (i.e. the limit exists)

and 𝑓(𝑥) is continuous at 𝑥 = 𝑒.

According to the theorem for limits of composite functions we then have

lim 𝑥→0 𝑓(𝑔(𝑥)) = 𝑓(lim 𝑥→0 𝑔(𝑥))

Sal explains that theorem here:

https://www.khanacademy.org/math/ap-calculus-ab/ab-limits-new/ab-1-5a/v/limits-of-composite-functions (86 votes)
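As a numeric illustration of the argument above (a Python sketch, not from the thread), we can watch 𝑔(𝑛) = (1 + 𝑛)^(1∕𝑛) approach 𝑒 and 𝑓(𝑔(𝑛)) = 1∕ln 𝑔(𝑛) approach 1∕ln 𝑒 = 1 as 𝑛 shrinks:

```python
import math

# g(n) = (1 + n)^(1/n) should approach e as n -> 0,
# so f(g(n)) = 1 / ln(g(n)) should approach 1 / ln(e) = 1.
for n in [1e-2, 1e-4, 1e-6]:
    g = (1 + n) ** (1 / n)
    f_of_g = 1 / math.log(g)
    print(f"n = {n:.0e}:  g(n) = {g:.8f},  f(g(n)) = {f_of_g:.8f}")

# For small n the value is within ~1e-5 of e itself.
assert abs((1 + 1e-6) ** (1 / 1e-6) - math.e) < 1e-5
```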

- How can e^x be the only function that is the derivative of itself? Doesn't f(x) = 19e^x also satisfy this property?(12 votes)
- When we say that the exponential function is the only function that is its own derivative, we mean it in the sense of solving the differential equation f' = f. It's true that 19f = (19f)', but this isn't really a new solution; I can pull the 19 out of the derivative and cancel it from both sides. You are correct in saying that the general solution is Ae^x, where A is a real constant; however, the "A" part isn't the main focus. The main focus is the exponential, since that's what varies, and the constants don't.(17 votes)

- Where can I find the proof of limit as n→infinity (1+1/n)^n =e and limit as n→0 (1+n)^(1/n)=e?(8 votes)
- https://www.khanacademy.org/math/algebra2/exponential-and-logarithmic-functions/e-and-the-natural-logarithm/v/e-as-limit

or

https://mathcs.clarku.edu/~djoyce/ma122/elimit.pdf

The proofs of the two formulas are the same:

lim_{n → ∞} (1 + 1/n)^n = lim_{1/n → 0} (1 + 1/n)^(n) = lim_{x → 0} (1 + x)^(1/x).(9 votes)
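The substitution x = 1/n in that chain of equalities can also be checked numerically (a Python sketch, not from the thread): both forms compute the same quantity and approach 𝑒.

```python
# (1 + 1/n)^n for large n and (1 + x)^(1/x) for x = 1/n are the same
# expression under the substitution x = 1/n; both approach e.
for n in [10.0, 1e3, 1e6]:
    x = 1 / n
    a = (1 + 1 / n) ** n        # the n -> infinity form
    b = (1 + x) ** (1 / x)      # the x -> 0 form
    assert abs(a - b) < 1e-6    # identical up to floating-point noise
    print(n, a)
```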

- How/why is (1+1/n)^n equal to (1+n)^(1/n)? Is this just a basic law of exponents?(7 votes)
- Think about it like this:

it is completely legal for us to define one variable as some amount of another variable. Therefore, we can say that n=1/u, for example.

Let's say n=1/u

and

𝑒 = lim (n → ∞) (1 + 1/n)^n

Now let's rewrite this in terms of u. The limit will be that u gets very small and approaches 0, because that is what makes the fraction 1/u very large. Since n = 1/u, if n approaches infinity, u must approach 0 from the positive side.

lim (u → 0) (1 + u)^(1/u) (I simplified 1/(1/u) to just u)

This, therefore, is equivalent to the other definition of e, because all we have done is described the variable in a new way without adding in or taking away anything from the original equation, just looking at it differently.(7 votes)

- At 7:23, is it that this is an application of the principle:

lim(x → a)[ f(g(x)) ] = f( lim(x → a)[g(x)] )

?(5 votes)
- Yes, with 𝑓(𝑥) = ln 𝑥 and 𝑔(𝑥) = (1 + 1∕𝑥)^𝑥

we get 𝑓(𝑔(𝑥)) = ln((1 + 1∕𝑥)^𝑥)

Because the natural log function is continuous, we have

lim[𝑥 → ∞] 𝑓(𝑔(𝑥)) = 𝑓(lim[𝑥 → ∞] 𝑔(𝑥))

= ln(lim[𝑥 → ∞] (1 + 1∕𝑥)^𝑥)(4 votes)

- 1. At 5:10, can you rigorously prove that Δx → 0 is equivalent to n → 0?

2. At 7:17, which limit property allows:

lim (n → 0) ln((1 + n)^(1/n)) (the denominator)

to become:

ln(lim (n → 0) (1 + n)^(1/n))?

Thanks.(1 vote)
- 1. Ooh, that's a hard one! I tried a proof using a method called the "epsilon-delta method" (which is the most rigorous method of proving literally anything related to limits), and it does seem to work. Here it is:

**WARNING:** This proof is pretty long and exhaustive. I'd definitely recommend learning the epsilon-delta method first and then going over this. It's available later on in the course.

So, we have $\Delta$x = ln(n+1). We are given the statement that as $\Delta$x tends to zero, n tends to 0 as well. We can prove these two separately. We assume the statement is true. And if we can prove that both $\Delta$x and ln(n+1) tend to the same number, we should be done.

Now, let's first prove that $\lim\limits_{\Delta x \to 0} \Delta$x = 0. Given any $\epsilon$>0, we need a $\delta$>0 such that if $\mid \Delta x - 0 \mid < \delta$, then $\mid \Delta x - 0 \mid < \epsilon$. We don't have to do much here, as this implication holds when $\delta = \epsilon$. So, we've proven the first limit.

Now, the second one is pretty hard. We have $\lim\limits_{n \to 0} ln(n+1)=0$. Again, given $\epsilon>0$, we must find a $\delta>0$ such that if $\mid n - 0 \mid < \delta$, then $\mid ln(n+1) - 0 \mid < \epsilon$.

Now, we start with $\mid ln(n+1)-0 \mid<\epsilon$. We can write this as $\mid ln(n+1) - ln(1)\mid< \epsilon$. This can be expanded to $-\epsilon < [ln(n+1)-ln(1)]<\epsilon$. Now, using log properties, we get $-\epsilon<ln(\frac{n+1}{1})<\epsilon$. Taking the ln away by using $e$, we get $e^{-\epsilon}<(n+1)<e^{\epsilon}$. Now, subtracting 1 on both sides, we have $e^{-\epsilon}-1 < n < e^{\epsilon}-1$. On simplifying the left side, we have $\frac{1-e^{\epsilon}}{e^{\epsilon}} < n < e^{\epsilon}-1$. Taking a negative out of the left side, we have $-\frac{e^{\epsilon}-1}{e^{\epsilon}} < n < e^{\epsilon}-1$.

Now, to make this thing look better, I'm gonna call $\frac{e^{\epsilon}-1}{e^{\epsilon}}$ as $\delta_{1}$ and $e^{\epsilon}-1$ as $\delta_{2}$. So, we now have $-\delta_{1}<n<\delta_{2}$. Much better to deal with!

Now, if you observe the expressions for $\delta_{1}$ and $\delta_{2}$, you can see that $\delta_{1}<\delta_{2}$. Why? Because $\delta_{1}$ is just $\delta_{2}$ divided by $e^{\epsilon}$, which makes it smaller. Think of it this way: if $10=10$ and I divide the LHS by 3, I get $\frac{10}{3}$ and $10$, and $\frac{10}{3}$ is clearly smaller. Same logic here. So, now that we know $\delta_{1} < \delta_{2}$, I'll take my main $\delta$ to be the minimum of the two, which is $\delta_{1}$ (it's always safer to take the smaller value of $\delta$). We then have $-\delta<n<\delta$, and hence $\mid n \mid < \delta$, which is EXACTLY what we needed: such a $\delta$ exists, namely $\delta = \delta_{1} = \frac{e^{\epsilon}-1}{e^{\epsilon}}$. Hence, this limit has also been proven. This shows that $\Delta{}$x and n both tend to 0 (they essentially imply each other, i.e. if one holds, the other does too, but I just proved each of them independently as well).

2. This uses the theorem on limits of composite functions.(11 votes)
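The δ found in the proof above can be spot-checked numerically (a Python sketch, not from the thread): for each ε, set δ = (e^ε − 1)∕e^ε and confirm that |ln(1 + n)| < ε for sample points with |n| < δ.

```python
import math

# For each epsilon, the proof's delta = (e^eps - 1)/e^eps should
# guarantee |ln(1 + n)| < eps whenever |n| < delta.
for eps in [0.5, 0.1, 0.01]:
    delta = (math.exp(eps) - 1) / math.exp(eps)
    for k in range(1, 100):
        # sample points on both sides of 0 with |n| strictly below delta
        n = delta * (k / 100) * (-1) ** k
        assert abs(math.log(1 + n)) < eps
```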

- Technically, the function x^0 - 1 is its own derivative.(2 votes)
- Any function of the form a·e^x is its own derivative, and these are the *only* functions that are their own derivatives. The zero function is just the special case where a = 0.(8 votes)

- "f(x)=0... is not a very interesting function"??... Blasphemy! If one does not appreciate the mystery and significance of Zero.. well, I can't be friends with said person.(5 votes)
- When/where do we learn that change of variables method?(4 votes)
- Hi, I am interested that Sal says that e = (1 + n)^(1/n). When I graphed y = (1 + x)^(1/x), the graph converges to 1. What mistake have I made?(3 votes)
- What you may have missed is the lim (n → 0) in that definition. You are correct that lim (x → ∞) (1 + x)^(1/x) = 1, but lim (x → 0) (1 + x)^(1/x) = e.(3 votes)