Main content

## AP®︎/College Calculus AB

### Unit 6: Lesson 3

Riemann sums, summation notation, and definite integral notation

- Summation notation
- Worked examples: Summation notation
- Riemann sums in summation notation
- Worked example: Riemann sums in summation notation
- Definite integral as the limit of a Riemann sum
- Worked example: Rewriting definite integral as limit of Riemann sum
- Worked example: Rewriting limit of Riemann sum as definite integral


# Definite integral as the limit of a Riemann sum

AP.CALC: LIM‑5 (EU), LIM‑5.B (LO), LIM‑5.B.1 (EK), LIM‑5.B.2 (EK), LIM‑5.C (LO), LIM‑5.C.1 (EK), LIM‑5.C.2 (EK)

Definite integrals represent the exact area under a given curve, and Riemann sums can be used to approximate those areas. However, if we take the limit of Riemann sums as the number of rectangles grows without bound (so their widths shrink toward zero), we get the exact area, i.e. the definite integral! Created by Sal Khan.
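To see this convergence concretely, here is a small numerical sketch (in Python; the helper name and the example function are just for illustration, not from the lesson) of left Riemann sums for f(x) = x² on [0, 3] approaching the exact area of 9 as n grows:

```python
# Left Riemann sum for f on [a, b] using n equal-width rectangles.
def left_riemann_sum(f, a, b, n):
    dx = (b - a) / n
    # Height of each rectangle is f at the left endpoint of its subinterval.
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda x: x ** 2  # exact area on [0, 3] is 3^3 / 3 = 9
for n in (10, 100, 1000, 10000):
    print(n, left_riemann_sum(f, 0, 3, n))  # approaches 9 as n grows
```

Each sum underestimates the area here (the function is increasing), but the error shrinks as n increases.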

## Want to join the conversation?

- Do definite integrals have something to do with derivatives? Or are they a completely different thing?(30 votes)
- No no no. Think of definite integration as the REAL integration (use the term "integral" to mean this definite integral). Integration existed LONG LONG LONG LONG before differentiation was even invented. It is a different subject. Archimedes and the Greeks would integrate curves, shapes, etc. to find areas.

Differentiation is a different thing. And the opposite of differentiation is antidifferentiation (which also has the UNFORTUNATE name of indefinite integral, but please use "antidifferentiation" to refer to opposite of differentiation).

Now these two are separate fields. Integration is finding areas under curves, between curves, inside shapes, etc. etc. etc.

Differentiation is finding rate of change of functions (i.e., slopes). And antidifferentiation is the reverse (that is, we have a function f, and we try to figure which function's rate of change f represents).

Now fast forward to Newton, Leibniz time. They already knew a lot about integration and differentiation as separate subjects. But these two made a connection.

Integration (definite) OF CONTINUOUS function is related to Antidifferentiation of that function

So the connection is a big theorem, but it is for continuous functions. Integration (area finding) can be done on discontinuous functions (e.g. ones with jump discontinuities), but the theorem doesn't apply to those. It only connects integration of continuous functions to antidifferentiation. Learn the two subjects as if they are totally different. Then later connect integration of continuous functions with differential calculus.(8 votes)

- Why wouldn't people use circles to approximate areas under curves?(13 votes)
- Technically you could, assuming you used circles small enough to meet your requirements for accuracy. Think about this in three dimensions: you could fill a volume with spheres or balls and then sum up the volumes of the spheres. But circles don't fit together very neatly, and making this rigorous is actually pretty mathematically hard, though give it a try both ways. In any case, if you **can** use circles to approximate the area under a curve, you'll pretty much know this subject well enough to do anything.(35 votes)

- Let's say you have a function y=x^2 and you want to find the area under the curve along the interval -3 to 3, and your delta x was 6 (or any finite number). Could you get not just an approximation, but the exact area under the curve, if you took the left-side approximation, added it to the right-side approximation, and then divided that sum by 2? In other words, would the amount that you overestimated the area under the curve be exactly the same as the amount that you underestimated it? My guess is that if the function has a vertical line of symmetry AND the interval along the x-axis extends an equal distance away from the line of symmetry in both directions (in our example -3 to 3, with the line of symmetry being the y-axis), then the amounts of overestimation and underestimation would be the same, and when you took the average of the two you would be left with the exact area under the curve. Thoughts?(6 votes)
- The quick answer is no, that wouldn't work. :-)

You are correct that it would work for a **very limited** number of cases on a **very limited** number of functions, but it wouldn't actually work for the example you give, because the function is curved. That means the average of the right approximation and the left approximation is NOT the same as the actual area -- we can see this with the function x^2 on [0, 1] by averaging the value at 0 (which is 0) and the value at 1 (which is 1). That average, 1/2, is not the same as the area under the function, which is 1/3.

Because the number of cases this would work for is so limited, it's easier just to use integration for everything -- and this approach has the advantage that it always works.(8 votes)
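The point above is easy to check numerically. This sketch (Python; the helper name is just for illustration) averages the one-rectangle left and right approximations of x² on [0, 1] and compares the result with the exact area 1/3:

```python
# One-sided Riemann sum: heights from left or right endpoints.
def riemann(f, a, b, n, side):
    dx = (b - a) / n
    start = 0 if side == "left" else 1
    return sum(f(a + (i + start) * dx) for i in range(n)) * dx

f = lambda x: x ** 2
left = riemann(f, 0, 1, 1, "left")    # f(0) * 1 = 0.0
right = riemann(f, 0, 1, 1, "right")  # f(1) * 1 = 1.0
avg = (left + right) / 2              # 0.5 -- this is the trapezoid estimate
exact = 1 / 3                         # true area under x^2 on [0, 1]
print(avg, exact)                     # 0.5 vs 0.333..., so averaging is not exact
```

Averaging the left and right sums is exactly the trapezoid rule, which still overestimates here because x² is convex.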

- Apart from the rectangular bars, there is some area left under the curve. What about that area?(5 votes)
- If you use finitely many Riemann rectangles, you have an approximation to the area under the curve. It is only through taking a limit as the number of rectangles increases ad infinitum (and their width shrinks to zero) that you will obtain the exact area under the curve.(8 votes)

- Can we replace n approaches infinity by delta x approaches zero? If not, then why?(5 votes)
- The real Riemann integral does it this way. It allows for any number of intervals, and each interval can be any size. Then we take the limit as the largest interval size goes to 0.(3 votes)
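Here is a rough sketch of that idea (Python; the helper name and random partition are just for illustration): a Riemann sum over a non-uniform partition of [0, 3] for f(x) = x², where the sum approaches the integral (9) as the mesh, the largest subinterval width, shrinks:

```python
import random

# Riemann sum over a random (non-uniform) partition of [a, b], sampling
# each subinterval at its left endpoint. As the mesh (largest subinterval
# width) shrinks, the sum approaches the integral regardless of spacing.
def random_partition_sum(f, a, b, n, seed=0):
    rng = random.Random(seed)
    cuts = sorted(rng.uniform(a, b) for _ in range(n - 1))
    points = [a] + cuts + [b]  # n subintervals, unequal widths
    mesh = max(points[i + 1] - points[i] for i in range(n))
    total = sum(f(points[i]) * (points[i + 1] - points[i]) for i in range(n))
    return total, mesh

f = lambda x: x ** 2  # exact integral on [0, 3] is 9
for n in (10, 100, 10000):
    s, mesh = random_partition_sum(f, 0, 3, n)
    print(n, round(mesh, 4), s)  # sums approach 9 as the mesh shrinks
```

Note the limit is taken over the mesh going to 0, not just the number of intervals going to infinity: many tiny intervals plus one wide one would not be good enough.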

- Wouldn't an infinitely small `Δx` be notated with `δx` or just `δ`?(3 votes)
- The Leibniz notation is dx.

Why? Years of convention.

The x must remain, since we need to know with respect to what variable the function is being integrated. This is more obvious once you get into multivariable calculus.(8 votes)

- What is the correct way of saying that delta x becomes extremely small (approaches 0) in a calculus context - does delta x become **infinitesimally** small or **infinitely** small? I know it's a trivial thing, but I wish to use the correct terminology.(2 votes)
- It's neither. We allow Δx to become *arbitrarily* small. That is, we can take Δx as close to 0 as we wish. The term 'infinitesimal' can help guide intuition, but it's a defining property of the real numbers that nothing is infinitesimal.(7 votes)

- How did Bernhard Riemann get all the properties of the integral from the Riemann sum? For example, how did he know from the sum that the definite integral from a to b of f(x) is F(b)-F(a), given that the sum is infinite? Or that if you take the derivative of the indefinite integral you get the integrand (how do you take the derivative of a limit with sigma notation)?(4 votes)
- Those are some very good questions. Any good book on calculus or one on elementary real analysis treating the Riemann integral should answer your questions (the details are too lengthy for me to type up here).

Let me point out two subtle facts. Firstly, a function may possess an anti-derivative, yet fail to be (Riemann) integrable. This fact is often overlooked, especially at the elementary level. What is more, even if `ƒ` is an integrable function on `[a, b]`, and we define the function `F` on `[a, b]` by `F(x) = ∫ [a, x] ƒ(t) dt`, the integral going from `a` to `x`, then `F` need not be differentiable. In other words, it need not be the case that `F'(x) = ƒ(x)` for all `x` - one can show that `F` is differentiable at `x` if and only if `ƒ` is continuous at `x`. Moreover, one can show that the set of points in `[a, b]` at which an integrable `ƒ` is discontinuous has *measure zero*, so `F` is differentiable *almost everywhere* (in a technical sense).(2 votes)

- is it possible to use Simpson's rule even when n is not an even number?(4 votes)
- "any" excludes unbounded functions, or functions with "too many" discontinuities and other such degeneracies (but for now, assume you can definite integrate any function that doesn't blow up in size or anything like that)(1 vote)

- I still just don't get how Riemann went from having the sigma notation to translating it to a definite integral? How did he know that the process of definitely integrating a function (integrating using the formulas and then replacing the bounds in the integral and subtracting them) would lead to the same answer? This question is really getting on my nerves because nowhere online have I found an explanation.(3 votes)
- Hi Christina

You have two questions here: one of notation (how did we go from Σ to ∫), and the other of how the definite integral came about.

**Notation**

If you have made it to integral calculus, you must have come through algebra and differential calculus, and if so, you have already seen a change in notation, so I’ll start there.

When you were in algebra you calculated the slope of a line as Δy/Δx. When you got to differential calculus the problem was not the slope of a line, but the slope of a curve at some point. The concept was developed by using a secant line, or average slope. If we wanted to estimate the slope at a point c, we drew a line between two points on either side of c. Then we asked: what happens as Δx gets smaller and smaller, that is, as the endpoints of the secant line get closer and closer to c? We decided that once the Δ of the Δy/Δx was infinitesimally small, we would change the notation from Δy/Δx to dy/dx, to remind us that we are dealing with infinitesimals (also called differentials). So for the slope of a straight line or secant line we use Δy/Δx, but for the slope of a tangent line we use dy/dx.

It is pretty much the same deal on how we went from Σ to ∫. The Riemann sum is a sum of sections whose width is Δx, so we have, in general, Σf(x)Δx. As we make Δx smaller and smaller, until it is infinitesimal, we again change the notation from Δx to dx AND we change the notation of Σ to ∫, that is, Σf(x)Δx becomes ∫f(x)dx. It really is just a visual reminder that we are dealing with infinitesimal changes in x, that is, dx. In case you didn't know, the integral symbol ∫ is just an elongated S, which stands for sum, so yes, the Riemann sum is the same idea as the Riemann integral; the only difference is that the Δx is infinitesimally small.

**The Definite Integral**

As far as how the definite integral came about, that happened way before Riemann. The two theorems are called the Fundamental Theorems of Calculus. They are talked about here on Khan. This Wikipedia page has proofs of them that do not require math skills above what you should have by now - it will clearly show how the definite integral was "discovered". Here are a few links to get you going:

https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus

http://mathforum.org/library/drmath/view/53374.html

I sure hope that helped. If not send me a comment on what you need clarified.

Stefen(2 votes)

## Video transcript

We've done several
videos already where we're approximating
the area under a curve by breaking up that
area into rectangles and then finding the sum of
the areas of those rectangles as an approximation. And this was actually
the first example that we looked at where
each of the rectangles had an equal width. So we equally
partitioned the interval between our two boundaries
between a and b. And the height of the
rectangle was the function evaluated at the left
endpoint of each rectangle. And we wanted to generalize it
and write it in sigma notation. It looked something like this. And this was one case. Later on, we looked
at a situation where you define the
height by the function value at the right endpoint
or at the midpoint. And then we even
constructed trapezoids. And these are all particular
instances of Riemann sums. So this right over
here is a Riemann sum. And when people talk
about Riemann sums, they're talking about
the more general notion. You don't have to
just do it this way. You could use trapezoids. You don't even have to have
equally-spaced partitions. I used equally-spaced partitions
because it made things a little bit
conceptually simpler. And this right here is
a picture of the person whom Riemann sums
were named after. This is Bernhard Riemann. And he made many
contributions to mathematics. But what he is most
known for, at least if you're taking a
first-year calculus course, is the Riemann sum. And how this is used to
define the Riemann integral. Both Newton and
Leibniz had come up with the idea of
the integral when they had formulated calculus,
but the Riemann integral is kind of the most
mainstream formal, or I would say
rigorous, definition of what an integral is. So as you could imagine, this is
one instance of a Riemann sum. We have n right over here. The larger n is, the better an
approximation it's going to be. So his definition of
an integral, which is the actual area
under the curve, or his definition of a
definite integral, which is the actual area under
a curve between a and b is to take this Riemann sum,
it doesn't have to be this one, take any Riemann sum, and
take the limit as n approaches infinity. So just to be clear,
what's happening when n approaches infinity? Let me draw another
diagram here. So let's say that's my y-axis. This is my x-axis. This is my function. As n approaches
infinity-- so this is a, this is b-- you're just going
to have a ton of rectangles. You're just going to get a
ton of rectangles over there. And there are going to
become better and better approximations for
the actual area. And the actual area
under the curve is denoted by the integral
from a to b of f of x times dx. And you see where
this is coming from or how these
notations are close. Or at least in my brain,
how they're connected. Delta x was the width for
each of these sections. This right here is delta x. So that is a delta x. This is another delta x. This is another delta x. A reasonable way to
conceptualize what dx is, or what a differential
is, is what delta x approaches, if it
becomes infinitely small. So you can conceptualize
this, and it's not a very rigorous way
of thinking about it, is an infinitely small-- but
not 0-- infinitely small delta x, is one way that you
can conceptualize this. So once again, as you
have your function times a little small
change in delta x. And you are summing,
although you're summing an infinite number
of these things, from a to b. So I'm going to
leave you there just so that you see the connection. You know the name
for these things. And once again, this
one over here, this isn't the only Riemann sum. In fact, this is often
called the left Riemann sum if you're using it
with rectangles. You can do a right Riemann sum. You could use the midpoint. You could use a trapezoid. But if you take the limit of
any of those Riemann sums, as n approaches
infinity, then what you get is the Riemann
definition of the integral. Now so far, we haven't
talked about how to actually evaluate this thing. This is just a
definition right now. And that we will
do in future videos.
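The transcript's point, that the limit of any of these Riemann sums gives the same definite integral, can be sketched numerically (Python; the helper name and example function are just for illustration):

```python
# Left, right, and midpoint Riemann sums all converge to the same limit,
# which is what the definite integral from a to b of f(x) dx denotes.
def riemann_sum(f, a, b, n, rule):
    dx = (b - a) / n
    # Where inside each subinterval we sample the height.
    offset = {"left": 0.0, "right": 1.0, "mid": 0.5}[rule]
    return sum(f(a + (i + offset) * dx) for i in range(n)) * dx

f = lambda x: x ** 2  # exact integral on [0, 3] is 9
for rule in ("left", "right", "mid"):
    print(rule, riemann_sum(f, 0, 3, 100000, rule))  # all near 9
```

For finite n the three rules disagree (left underestimates, right overestimates for this increasing function), but the disagreement vanishes in the limit, which is exactly why the definite integral is well defined.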