Computing powers of a number

Although JavaScript has a built-in pow function that computes powers of a number, you can write a similar function recursively, and it can be very efficient. The only hitch is that the exponent has to be an integer.
Suppose you want to compute x^n, where x is any real number and n is any integer. It's easy if n is 0, since x^0 = 1 no matter what x is. That's a good base case.
So now let's see what happens when n is positive. Let's start by recalling that when you multiply powers of x, you add the exponents: x^a * x^b = x^(a+b) for any base x and any exponents a and b. Therefore, if n is positive and even, then x^n = x^(n/2) * x^(n/2). If you were to compute y = x^(n/2) recursively, then you could compute x^n as y * y.
What if n is positive and odd? Then x^n = x^(n-1) * x, and n-1 either is 0 or is positive and even. We just saw how to compute powers of x when the exponent either is 0 or is positive and even. Therefore, you could compute x^(n-1) recursively, and then use this result to compute x^n = x^(n-1) * x.
What about when n is negative? Then x^n = 1 / x^(-n), and the exponent -n is positive, since it's the negation of a negative number. So you can compute x^(-n) recursively and take its reciprocal.
Putting these observations together, we get the following recursive algorithm for computing x^n (a sketch in code follows the list):
  • The base case is when n = 0, and x^0 = 1.
  • If n is positive and even, recursively compute y = x^(n/2), and then x^n = y * y. Notice that you can get away with making just one recursive call in this case, computing x^(n/2) just once, and then you multiply the result of this recursive call by itself.
  • If n is positive and odd, recursively compute x^(n-1), so that the exponent either is 0 or is positive and even. Then, x^n = x^(n-1) * x.
  • If n is negative, recursively compute x^(-n), so that the exponent becomes positive. Then, x^n = 1 / x^(-n).
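Here is one way that algorithm could look in JavaScript. This is only a sketch, not the official challenge solution; it assumes n is an integer and uses the % operator to test whether n is odd:

var power = function(x, n) {
    // base case: x^0 = 1
    if (n === 0) {
        return 1;
    }
    // n is negative: x^n = 1 / x^(-n)
    if (n < 0) {
        return 1 / power(x, -n);
    }
    // n is positive and odd: x^n = x^(n-1) * x
    if (n % 2 !== 0) {
        return power(x, n - 1) * x;
    }
    // n is positive and even: x^n = y * y, where y = x^(n/2)
    var y = power(x, n / 2);
    return y * y;
};

In the Khan Academy environment you could check it with println(power(2, 10)), which should display 1024.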

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.

Want to join the conversation?

  • seanmcolby:
    I understand we are trying to learn how to do recursive functions (so practicing it is worthwhile no matter what), but in this case is the use of a recursive function beneficial? In other words, is this simply an exercise, or is the recursive implementation actually better than the alternatives?

    (An alternative could be to just loop through n, multiplying by x each time)
    (20 votes)
    • Cameron:
      Multiplying x n times will require n steps, but the recursion only requires log_2(n) steps.

      That being said, this recursion (as with all recursions) can be converted to an iterative algorithm, and the iterative version will save time and memory.

      Basically, the iterative version is (a sketch in code follows the steps):
      1) multiplier = x
      2) result = 1
      3) Start at the rightmost bit in the binary representation of n
      4) If the bit in the binary representation of n is set, then result = result * multiplier
      5) multiplier = multiplier * multiplier
      6) Move left 1 bit. If no more bits are left, then exit and return result
      7) Go to step 4
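      A rough sketch of those steps in JavaScript, assuming n is a nonnegative integer (the name powerIterative is mine, not from the article):

      var powerIterative = function(x, n) {
          var multiplier = x;                        // step 1
          var result = 1;                            // step 2
          while (n > 0) {                            // steps 3-7: scan the bits of n from right to left
              if (n % 2 === 1) {                     // step 4: the current rightmost bit is set
                  result = result * multiplier;
              }
              multiplier = multiplier * multiplier;  // step 5
              n = Math.floor(n / 2);                 // step 6: move left one bit
          }
          return result;
      };

      For example, powerIterative(2, 10) returns 1024.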

      Recursive versions of algorithms are usually easy to write, but if we are worried about speed or memory we try to convert them to iterative versions. One big concern is the recursion depth (how many times the algorithm calls itself). If the depth is small, recursive algorithms are often a good solution. If the recursion depth is large, then running out of stack memory becomes a real concern.
      (58 votes)
  • Antoine LeBel:
    Just my 2 cents, but I think the algorithm can be simpler:

    var power = function(x, n) {
        if (n === 0) {
            return 1;
        }

        if (n < 0) {
            return 1 / power(x, Math.abs(n)); // use Math.abs so the recursive exponent is positive
        }

        return x * power(x, n - 1);
    };

    (See Cameron's answer below.) There's no need for this odd/even thing, or does it make the algorithm faster? I can't see why... As far as I can tell my algorithm works. Actually, by Big Theta analysis it seems to me that both my algorithm and the challenge's are Θ(n). Still, I do understand that it divides the exponent by 2 when it is even, which is pretty smart.
    (6 votes)
    • Cameron:
      For convenience, let's call the algorithm that you have proposed Algorithm A, and the algorithm proposed in the article Algorithm B.

      Algorithm A is O(n) while Algorithm B is, in fact, O(log(n)).
      This makes a big difference if n is large.

      The difference between Algorithm A and Algorithm B is analogous to the difference between linear search and binary search.
      Algorithm A and linear search only reduce the size of their problem by 1 after each iteration/recursion.
      On the other hand, Algorithm B and binary search, roughly speaking, reduce the size of their problem in half each iteration/recursion.

      But Algorithm B doesn't always reduce its problem size in half. It only reduces it in half when n is even. So how can we claim that it is O(log(n))?
      If n were a power of 2, then we would halve the problem every recursion. It would take:
      log(n) multiplications
      If the exponent is odd, we have to perform 1 extra multiplication, to make it even, before we can reduce the problem size in half. So even if the exponent were odd after every halving, it would only take:
      2 * log(n) multiplications
      But 2 * log(n) is still O(log n)
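      For example (a trace of my own, to illustrate the odd/even alternation), computing x^13 with Algorithm B goes:
      x^13 = x^12 * x     (odd step)
      x^12 = x^6 * x^6    (even step)
      x^6 = x^3 * x^3     (even step)
      x^3 = x^2 * x       (odd step)
      x^2 = x^1 * x^1     (even step)
      x^1 = x^0 * x       (odd step)
      x^0 = 1
      That is 6 multiplications, versus 13 with Algorithm A, and never more than 2 multiplications per halving of the exponent.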

      (For the sake of accuracy, I should mention that in the above analysis I used pseudo-asymptotic times for the sake of clarity. For the real asymptotic time, n should be measured in the number of bits required to represent its value.)

      Hope this makes sense
      (55 votes)
  • p.mrutu:
    Why not also use x^n = x^(n-1) * x when n is positive and even? Why does the recursive case need to be different?
    (6 votes)
    • Cameron:
      Short Answer :
      We don't want to use x^n = x^(n-1) * x for the positive even case, because it is inefficient.

      Long Answer :

      If we use: x^n = x^(n-1) * x
      Then:
      -we use 1 multiplication
      -we reduce the size of our problem by 1 (we still have to figure out the value of x^(n-1))

      If we use x^n = x^(n/2) * x^(n/2)
      Then:
      -we use 1 multiplication
      -we reduce the size of our problem in half (we still have to figure out the value of x^(n/2))

      We can quickly see that if n was a power of two then:
      -the top method would require ~n multiplications
      -the bottom method would only require ~log_2(n) multiplications

      e.g. n=8
      Top method:
      x^8= x^7 * x (need to find x^7)
      x^7= x^6 * x (need to find x^6)
      x^6= x^5 * x (need to find x^5)
      x^5= x^4 * x (need to find x^4)
      x^4= x^3 * x (need to find x^3)
      x^3= x^2 * x (need to find x^2)
      x^2= x^1 * x (need to find x^1)
      x^1= x^0 * x (need to find x^0)
      x^0 = 1

      Bottom method:
      x^8= x^4 * x^4 (need to find x^4)
      x^4= x^2 * x^2 (need to find x^2)
      x^2= x^1 * x^1 (need to find x^1)
      x^1= x^0 * x (need to find x^0)
      x^0 = 1

      We can see that the bottom method is much more efficient than the top method.

      Hope this makes sense
      (33 votes)
  • Kayode Adekanye:
    When implementing the negative part of the Recursive powers challenge, I did this:
    if (n < 0) {
        return 1 / (x * power(x, n+1));
    }

    and got the expected output but did not pass the test until I did this:
    if (n < 0) {
        return 1 / (power(x, -1*n));
    }

    Why is that?
    (5 votes)
    • Cameron:
      The first method is inefficient since it only reduces the problem size by 1 each recursive call.
      The second method is fast (if you did the code for positive exponents correctly), since it reduces the problem size by ~1/2 its current size each recursive call.
      So for something like x^-32 the first method would require 32 calls, but the second would require 6 calls.
      (13 votes)
  • TomBartul:
    It seems hard to spot when you could implement the principle of recursion.
    The rules for when it works are simple, but heck if I could easily spot the opportunity to use it. Any tips?
    (5 votes)
    • Cameron:
      Some problems that may be solved by using recursion:
      -You have a large problem, which you could solve if only you had a magic box that would give you the answer to smaller problems of the same form as the larger problem

      e.g.
      factorials
      I could solve factorial(n), if I had the answer to factorial(n-1), because:
      factorial(n)= n * factorial(n-1)
      Note: larger problem, solved by smaller problem of the same form
      This is a good candidate for recursion

      e.g.
      searching for a number in a sorted array
      I could find the index of a target value in a sorted array if I had the results of searches for the target value on the lower half of the array, and the upper half of the array
      This is a recursive version of binary search (a small sketch follows).
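      A minimal sketch of that recursive binary search in JavaScript (the name binarySearch and its lo/hi index parameters are my own choices, not part of any challenge code):

      var binarySearch = function(array, target, lo, hi) {
          // base case: the range is empty, so the target is not present
          if (lo > hi) {
              return -1;
          }
          var mid = Math.floor((lo + hi) / 2);
          if (array[mid] === target) {
              return mid;                                       // found it
          } else if (array[mid] < target) {
              return binarySearch(array, target, mid + 1, hi);  // recurse into the upper half
          } else {
              return binarySearch(array, target, lo, mid - 1);  // recurse into the lower half
          }
      };

      For example, binarySearch([1, 3, 5, 7, 9], 7, 0, 4) returns 3.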

      When recursion may be a bad idea:
      -If the combined effort of solving the subproblems is more than the effort of solving the larger problem directly
      e.g. Fibonacci numbers (the naive recursive version recomputes the same Fibonacci numbers over and over, so it takes exponential time, while a simple loop takes only linear time)
      (10 votes)
  • AT:
    Not sure about negative case..
    (2 votes)
  • Tetyana Silvestrova:
    I have a question concerning the efficiency of this recursive function relative to the iterative version of it. So if this is a recursive function, it should perform with O(log(n)) complexity, while the iterative function should have O(n) complexity, if I am not mistaken. But when I try to write both programs in Java, measuring the execution time with System.nanoTime(), it appears that the iterative function is 1000 times quicker than the recursive function (iterative = roughly 500 nanoseconds, recursive = roughly 400,000 nanoseconds).
    My iterative function looks like:
    private static long computePowerIteratively(int pow, int num) {
        long startTime = System.nanoTime();

        long result = 1;
        if (pow == 0) {
            return 1;
        } else if (pow < 0) {
            for (int i = 1; i <= -pow; i++) {
                result *= num;
            }
            long endTime = (System.nanoTime() - startTime);
            System.out.println("Spent " + endTime + " nanoseconds");
            System.out.println(num + " to the power of " + pow + " is " + 1/result);
            return 1/result;
        } else {
            for (int i = 1; i <= pow; i++) {
                result *= num;
            }
            long endTime = (System.nanoTime() - startTime);
            System.out.println("Spent " + endTime + " nanoseconds");
            System.out.println(num + " to the power of " + pow + " is " + result);
            return result;
        }
    }
    (3 votes)
    • Cameron:
      Often, for small values of n, simple algorithms which have poor asymptotic complexity will outperform complex algorithms which have good asymptotic complexity. Asymptotic complexity only says how an algorithm will perform for large values of n.

      To properly analyze the running times you need to plot two curves (one for each algorithm) of the running time vs. n. The data will also likely be noisy, so instead of timing the function only once for that n, you should run it 100s or 1000s of times for that size (or possibly more if the data remains noisy) and average the times to get reliable values. Once you plot those curves, you should see that the iterative one is linear and the recursive one is logarithmic.
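      For instance, here is a rough JavaScript sketch of that kind of measurement (the helper name averageTimeMs is my own, and it assumes Date.now() is available in your environment):

      // Average the running time of fn() over many repetitions, in milliseconds.
      var averageTimeMs = function(fn, repetitions) {
          var start = Date.now();
          for (var i = 0; i < repetitions; i++) {
              fn();
          }
          return (Date.now() - start) / repetitions;
      };

      Calling this once per implementation for each value of n, for example averageTimeMs(function() { power(2, n); }, 10000), gives one point on each curve.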

      Remember that in this case, n is the power (the exponent).
      (Note: for accuracy, I will mention that the algorithms are not really O(n) and O(log n); they are, in fact, pseudo-linear and pseudo-logarithmic, because n should be measured by its size in bits. However, this does not detract from the discussion above.)

      Hope this makes sense
      (6 votes)
  • Anchit Virmani:
    Why is x^n written as x^(n/2) * x^(n/2)?
    I can also write it as x^(n/4) * x^(n/4) * x^(n/4) * x^(n/4).
    How do you do the complexity analysis?
    (3 votes)
    • Cameron:
      You could only do x^(n/4) * x^(n/4) * x^(n/4) * x^(n/4) if n is evenly divisible by 4.
      In the worst case n is odd.
      In either case you need at most 2 steps to reduce the size of n in half (1st step is the odd step and the 2nd is the even step).

      If the time required to complete the code in the odd step and the even step is a constant c1, then the cost is c1 * log_2(n).

      So we could say O(log(n)) for a pseudo-asymptotic time.
      (The size of the problem, n, is supposed to be measured in bits, so it is actually O(n) for the true asymptotic time.)
      (3 votes)
  • old account of @nimcord:
    Doesn't x^n = x^(n - 1) * x work even for even numbers? Like
    2^4 = 2^3 * 2
    2^3 = 2^2 * 2
    2^2 = 2^1 * 2
    2^1 = 2^0 * 2
    2^0 = 1
    2^1 = 2
    2^2 = 4
    2^3 = 8
    2^4 = 16
    (1 vote)
    • Cameron:
      It works, but it isn't as efficient as calculating:
      x^n = x^(n/2) * x^(n/2)
      (O(n) vs O(log n))
      We want our algorithms to produce the correct result and to be efficient.

      (For the sake of accuracy, I should mention that in the above analysis I used pseudo-asymptotic times for the sake of clarity. For the real asymptotic time, n should be measured in the number of bits required to represent its value.)
      (6 votes)
  • Tausif Iqbal:
    I'm stuck on the 2nd step. Here is my power function:
    var power = function(x, n) {
        println("Computing " + x + " raised to power " + n + ".");
        // base case
        if (n === 0) {
            return 1;
        }
        // recursive case: n is negative
        // recursive case: n is odd
        if (isOdd(n)) {
            return power(x, n-1);
        }
        // recursive case: n is even
    };
    (2 votes)
    • Cameron:
      Let's use an example.
      If we want to compute 2^5 then x=2 and n=5
      Since n is odd it's going to return us the result for
      2^4. But we didn't ask for the result for 2^4, we asked for the result for 2^5. So, there must be something else we need to do.

      Hint:
      2 * 2^4 = 2^5
      (3 votes)