Proof: The derivative of 𝑒ˣ is 𝑒ˣ

𝑒ˣ is the only function that is the derivative of itself!
d∕d𝑥[𝑒ˣ] = 𝑒ˣ
(Well, actually, 𝑓(𝑥) = 0 is also the derivative of itself, but it's not a very interesting function...)
The AP Calculus course doesn't require knowing the proof of this fact, but we believe that as long as a proof is accessible, there's always something to learn from it. In general, it's always good to require some kind of proof or justification for the theorems you learn.
[Video: Proof: The derivative of 𝑒ˣ is 𝑒ˣ (see video transcript)]
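If you want a quick numerical sanity check of this fact, here is a rough sketch in Python (not part of the course material; the helper name is just for illustration): the difference quotient (𝑒^(𝑥+ℎ) − 𝑒ˣ)∕ℎ settles toward 𝑒ˣ as ℎ shrinks.

import math

def difference_quotient(x, h):
    # (e^(x+h) - e^x) / h, a numerical approximation of the derivative of e^x at x
    return (math.exp(x + h) - math.exp(x)) / h

x = 1.3
for h in [1e-1, 1e-3, 1e-5]:
    print(h, difference_quotient(x, h), math.exp(x))
# The approximations get closer to e^1.3 ≈ 3.6693 as h -> 0.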

Want to join the conversation?

  • tejas_gondalia
    At , how did the limit get inside the logarithm function? I'm having a hard time making sense of this step. It's like saying lim (x → 0) cos(x) = cos(lim (x → 0) x).
    How is that possible?
    Can this only be applied to logarithm functions, or is it generic for other functions like cos, sin, etc.?
    (40 votes)
    • Jerry Nilsson
      It's NOT a general rule, and I wish Sal spent some time explaining why it works in this particular case.

      – – –

      First of all, we're dealing with a composite function.

      𝑓(𝑥) = 1∕ln 𝑥
      𝑔(𝑥) = (1 + 𝑥)^(1∕𝑥)

      𝑓(𝑔(𝑥)) = 1∕ln((1 + 𝑥)^(1∕𝑥))

      In general terms we are looking for
      𝐹 = lim(𝑛 → 0) 𝑓(𝑔(𝑛))

      This means that we let 𝑛 approach zero, which makes 𝑔(𝑛) approach some limit 𝐺, which in turn makes 𝑓(𝑔(𝑛)) approach 𝐹.

      In other words:
      𝐺 = lim(𝑛 → 0) 𝑔(𝑛)
      𝐹 = lim(𝑔(𝑛) → 𝐺) 𝑓(𝑔(𝑛)) = [let 𝑥 = 𝑔(𝑛)] = lim(𝑥 → 𝐺) 𝑓(𝑥)

      Now, if we use our definitions of 𝑓(𝑥) and 𝑔(𝑥), we get
      𝐺 = lim(𝑛 → 0) (1 + 𝑛)^(1∕𝑛) = [by definition] = 𝑒
      𝐹 = lim(𝑥 → 𝑒) 1∕ln 𝑥 = [by direct substitution] = 1∕ln 𝑒 = 1

      Note that 𝐹 was given to us by direct substitution, which means that in this particular case we have
      lim(𝑥 → 𝐺) 𝑓(𝑥) = 𝑓(𝐺) = 𝑓(lim(𝑛 → 0) 𝑔(𝑛))

      – – –

      EDIT (10/28/21):
      This works because lim 𝑥→0 𝑔(𝑥) = 𝑒 (i.e. the limit exists)
      and 𝑓(𝑥) is continuous at 𝑥 = 𝑒.

      According to the theorem for limits of composite functions we then have
      lim 𝑥→0 𝑓(𝑔(𝑥)) = 𝑓(lim 𝑥→0 𝑔(𝑥))

      Sal explains that theorem here:
      https://www.khanacademy.org/math/ap-calculus-ab/ab-limits-new/ab-1-5a/v/limits-of-composite-functions
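
      Here's a quick numerical check of the above (just a Python sketch, not part of the proof; f and g follow the definitions at the top of this answer): as 𝑛 → 0, 𝑔(𝑛) approaches 𝑒 and 𝑓(𝑔(𝑛)) approaches 1.

      import math

      g = lambda n: (1 + n) ** (1 / n)   # g(n) = (1 + n)^(1/n)
      f = lambda x: 1 / math.log(x)      # f(x) = 1 / ln(x)

      for n in [1e-1, 1e-3, 1e-6]:
          print(n, g(n), f(g(n)))
      # g(n) -> e ≈ 2.71828 and f(g(n)) -> 1 / ln(e) = 1 as n -> 0.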
      (81 votes)
  • jasminepandit
    How can e^x be the only function that is the derivative of itself? Doesn't f(x) = 19e^x also satisfy this property?
    (12 votes)
    • Moon Bears
      When we say that the exponential function is the only function that is its own derivative, we mean it in the sense of solving the differential equation f' = f. It's true that (19f)' = 19f, but that isn't really a different solution: I can pull the 19 out of the derivative and cancel it from both sides. You are correct in saying that the general solution is Ae^x, where A is a real constant; however, the "A" part isn't the main focus. The main focus is the exponential, since that's the part that varies while the constant doesn't.
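
      If you'd like to see that numerically, here's a small Python sketch (not a proof; A = 19 is chosen to match the question): the difference quotient of Ae^x comes out approximately equal to Ae^x itself.

      import math

      A = 19
      f = lambda x: A * math.exp(x)   # f(x) = A * e^x

      x, h = 0.5, 1e-6
      approx_derivative = (f(x + h) - f(x)) / h
      print(approx_derivative, f(x))   # both are ≈ 31.33, so f' ≈ f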
      (17 votes)
  • Ruize Wang
    Where can I find the proofs that lim (n → ∞) (1 + 1/n)^n = e and lim (n → 0) (1 + n)^(1/n) = e?
    (8 votes)
  • Dylan
    How/why is (1 + 1/n)^n equal to (1 + n)^(1/n)? Is this just a basic law of exponents?
    (7 votes)
    • Justin O'Dwyer
      Think about it like this:

      It is completely legal for us to define one variable in terms of another. For example, we can say that n = 1/u.

      Let's say n=1/u

      and

      e = lim (n → ∞) (1 + 1/n)^n

      Now let's rewrite this in terms of u. Since n = 1/u, letting n approach infinity means u must get very small and approach 0, because that is what makes the fraction 1/u become very large.

      e = lim (u → 0) (1 + u)^(1/u)   (I simplified 1/(1/u) to just u)

      This, therefore, is equivalent to the other definition of e, because all we have done is describe the variable in a new way, without adding anything to or taking anything away from the original expression; we are just looking at it differently.
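
      A quick numerical illustration of the substitution (a Python sketch, not part of the argument): for u = 1/n the two expressions give the same values, and both head toward e.

      import math

      for n in [10, 1000, 100000]:
          u = 1 / n
          print(n, (1 + 1/n) ** n, (1 + u) ** (1/u))
      # The two columns agree (up to floating-point rounding) and approach e as n grows (i.e. as u shrinks).
      print(math.e)   # reference value, e ≈ 2.71828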
      (7 votes)
  • Andrew.Blais
    At , is this an application of the principle:

    lim(x->a)[ f(g(x)) ] = f( lim(x->a)[g(x)] )

    ?
    (5 votes)
    • Jerry Nilsson
      Yes, with 𝑓(𝑥) = ln 𝑥 and 𝑔(𝑥) = (1 + 1∕𝑥)^𝑥
      we get 𝑓(𝑔(𝑥)) = ln((1 + 1∕𝑥)^𝑥)

      Because lim[𝑥 → ∞] 𝑔(𝑥) = 𝑒 exists and the natural log function is continuous at 𝑒, we have
      lim[𝑥 → ∞] 𝑓(𝑔(𝑥)) = 𝑓(lim[𝑥 → ∞] 𝑔(𝑥))
      = ln(lim[𝑥 → ∞] (1 + 1∕𝑥)^𝑥)
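
      As a quick numerical check (a Python sketch, not part of the argument): as 𝑥 grows, (1 + 1∕𝑥)^𝑥 approaches 𝑒 and ln((1 + 1∕𝑥)^𝑥) approaches ln 𝑒 = 1.

      import math

      for x in [1e2, 1e4, 1e6]:
          g = (1 + 1/x) ** x          # g(x) = (1 + 1/x)^x
          print(x, g, math.log(g))
      # g -> e ≈ 2.71828 and ln(g) -> ln(e) = 1 as x grows.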
      (4 votes)
  • boggle yellow style avatar for user Gustavo Sáez
    Technically, the function f(x) = x^0 − 1 (which is just the zero function) is also its own derivative.
    (2 votes)
  • Liang
    1. At , can you rigorously prove that Δx → 0 is equivalent to n → 0?

    2. At , which limit property allows the denominator
    lim (n → 0) ln((1 + n)^(1/n))
    to become
    ln(lim (n → 0) (1 + n)^(1/n))?

    Thanks.
    (1 vote)
    • Venkata
      1. Ooo that's a hard one! I tried a proof using a method called the "epsilon-delta method" (which is the most rigorous method of proving literally anything related to limits) and it does seem to work. Here it is:

      WARNING This proof is pretty long and exhaustive. I'd definitely recommend learning the epsilon-delta method first and then going over this. It's available later on in the course.

      So, we have $\Delta x = \ln(n+1)$. We are given the statement that as $\Delta x$ tends to zero, $n$ tends to 0 as well. We can prove the two limits separately: if we can show that both $\Delta x$ and $\ln(n+1)$ tend to the same number (namely 0), we should be done.

      Now, let's first prove that $\lim\limits_{\Delta x \to 0} \Delta x = 0$. For every $\epsilon > 0$ I need a $\delta > 0$ such that if $0 < \mid \Delta x - 0 \mid < \delta$, then $\mid \Delta x - 0 \mid < \epsilon$. We don't have to do much here, as this holds when $\delta = \epsilon$. So, we've proven the first limit.

      Now, the second one is harder. We want $\lim\limits_{n \to 0} \ln(n+1) = 0$, so for every $\epsilon > 0$ we need to find a $\delta > 0$ such that if $0 < \mid n - 0 \mid < \delta$, then $\mid \ln(n+1) - 0 \mid < \epsilon$. To find such a $\delta$, we work backwards from the inequality we want.

      Now, we start with $\mid \ln(n+1) - 0 \mid < \epsilon$. We can write this as $\mid \ln(n+1) - \ln(1) \mid < \epsilon$, which expands to $-\epsilon < \ln(n+1) - \ln(1) < \epsilon$. Using log properties, we get $-\epsilon < \ln\left(\frac{n+1}{1}\right) < \epsilon$. Exponentiating (applying $e$ to all parts), we get $e^{-\epsilon} < n+1 < e^{\epsilon}$. Subtracting 1 from all parts, we have $e^{-\epsilon} - 1 < n < e^{\epsilon} - 1$. Simplifying the left side, we have $\frac{1-e^{\epsilon}}{e^{\epsilon}} < n < e^{\epsilon} - 1$. Taking a negative out of the left side, we have $-\frac{e^{\epsilon}-1}{e^{\epsilon}} < n < e^{\epsilon} - 1$.

      Now, to make this thing look better, I'm gonna call $\frac{e^{\epsilon}-1}{e^{\epsilon}}$ as $\delta_{1}$ and $e^{\epsilon}-1$ as $\delta_{2}$. So, we now have $-\delta_{1}<n<\delta_{2}$. Much better to deal with!

      Now, if you observe the expressions for $\delta_{1}$ and $\delta_{2}$, you can see that $\delta_{1} < \delta_{2}$. Why? Because $\delta_{1}$ is just $\delta_{2}$ divided by $e^{\epsilon}$, which makes it smaller. Think of it this way: if I divide 10 by 3, I get $\frac{10}{3}$, which is clearly smaller than 10. Same logic here. So, now that we know $\delta_{1} < \delta_{2}$, I'll take my main $\delta$ to be the smaller of the two, $\delta_{1}$ (it's always safe to take the smaller value of $\delta$). Then any $n$ with $\mid n \mid < \delta$ satisfies $-\delta_{1} < n < \delta_{2}$, and since every step in the derivation above is reversible ($\ln$ is increasing), this gives $\mid \ln(n+1) \mid < \epsilon$, which is EXACTLY what we needed: such a $\delta$ exists, namely $\delta = \delta_{1} = \frac{e^{\epsilon}-1}{e^{\epsilon}}$. Hence, this limit has also been proven. So $\Delta x$ and n tend to 0 together (each implies the other, and I've now proven the two limits independently as well).

      2. This uses the theorem on limits of composite functions.
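
      (And if you want to sanity check the $\delta$ from part 1 numerically, here's a rough Python sketch, not a substitute for the proof; the helper name is just for illustration. It samples values of n with |n| < δ and confirms that |ln(n+1)| < ε.)

      import math

      def delta_for(eps):
          # The delta chosen in part 1: (e^eps - 1) / e^eps
          return (math.exp(eps) - 1) / math.exp(eps)

      eps = 0.01
      delta = delta_for(eps)
      # Sample n values strictly inside (-delta, delta) and check |ln(n + 1)| < eps
      samples = [k * delta / 1000 for k in range(-999, 1000)]
      assert all(abs(math.log(n + 1)) < eps for n in samples)
      print("delta =", delta, "works for eps =", eps)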
      (9 votes)
  • scott.d.corwin
    When/where do we learn that change of variables method?
    (4 votes)
  • James Birkin
    Hi - I am interested that Sal says that e = lim (n → 0) (1 + n)^(1/n), but when I graphed y = (1 + x)^(1/x) the graph converges to 1. What mistake have I made?
    (3 votes)
  • Aosttpp
    At , I'm still not sure why e^x can be factored out of the equation.
    I've tried to review quite a few videos showing this rule and still can't figure it out. Why can a number that isn't relevant to what we are trying to find just be taken out? My instinct is to divide everything by e^x, if that makes sense. If available, I'd love to watch a video on this. Thanks!
    (3 votes)