# Review and intuition why we divide by n-1 for the unbiased sample variance

Video transcript

What I want to do in
this video is review much of what we've already talked
about and then hopefully build some of the intuition on
why we divide by n minus 1 if we want to have an unbiased
estimate of the population variance when we're calculating
the sample variance. So let's think
about a population. So let's say this is the population right over here, and it is of size capital N. And we also have a sample of that population, and its size is lowercase n data points. So let's think about all of
the parameters and statistics that we know about so far. So the first is the idea
of the mean. So if we're trying to calculate the mean for the population, is that going to be a parameter or a statistic? Well, when we calculate something for the population, we are calculating a parameter. And when we attempt to calculate something for a sample, we would call that a statistic. So how do we think about
the mean for a population? Well, first of all, we denote it with the Greek letter mu. And we essentially take the sum of every data point in our population --we start at the first data point and go all the way to the capital Nth data point, so x sub 1 plus x sub 2, all the way to x sub capital N-- and then we divide by the total number of data points we have. Well, how do we calculate the sample mean? We do a very similar thing with the sample, and we denote it with an x with a bar over it. We take every data point in the sample, going up to lowercase n, add them up --so this is the sum of all the data points in our sample-- and then divide by the number of data points that we actually had.
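As a sketch of the two formulas just described --using made-up values for the population and the sample-- the mean calculation is the same in both cases; only the data you feed it differs:

```python
def mean(data):
    """Sum every data point and divide by how many there are.
    This is mu when `data` is the whole population, and x-bar
    when `data` is a sample drawn from it."""
    return sum(data) / len(data)

population = [2, 4, 4, 4, 5, 5, 7, 9]  # capital N = 8 (made-up values)
sample = [4, 5, 9]                     # lowercase n = 3

mu = mean(population)   # population mean: a parameter
x_bar = mean(sample)    # sample mean: a statistic
print(mu, x_bar)        # 5.0 6.0
```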
Now, the other thing that we're trying to calculate for the population, which is a parameter --and which we'll also try to calculate for the sample and use to estimate it for the population-- is the variance, a measure of how dispersed the data is, or how much the data points vary from the mean. So let's write variance right over here. And how do we denote and calculate variance for a population? Well, for the population, we say that the variance --we use the Greek letter sigma squared-- is the mean of the squared distances from the population mean. For each data point, i equals 1 all the way to capital N, we take that data point and subtract the population mean from it. So if you want to calculate this, you'd want to figure out the population mean first. That's one way to do it; we'll see there are other ways, where you can calculate them at the same time. But the easiest, or the most intuitive, is to calculate the mean first, then for each of the data points subtract the mean from it, square that, and then divide by the total number of data points you have.
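That recipe --find mu first, then average the squared distances from it-- can be sketched like this, reusing the made-up population from before:

```python
def population_variance(data):
    """Sigma squared: the mean of the squared distances
    from the population mean."""
    mu = sum(data) / len(data)
    return sum((x - mu) ** 2 for x in data) / len(data)

population = [2, 4, 4, 4, 5, 5, 7, 9]   # made-up values; mu = 5.0
print(population_variance(population))  # 4.0
```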
Now, we get to the interesting part-- sample variance. When people talk about sample variance, there are several tools in their toolkit, several ways to calculate it. One way is the biased sample variance, the non-unbiased estimator of the population variance. And that's usually denoted by s with a subscript n. And how do we calculate this biased estimator? Very similarly to how we calculated the variance right over here, but we do it for our sample, not our population. So for every data point in our sample --we have n of them-- we take that data point, subtract our sample mean from it, square it, and then divide by the number of data points that we have.
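A sketch of the biased estimator: identical to the population variance above, except that it runs over a sample and measures distances from the sample mean:

```python
def biased_sample_variance(sample):
    """S with subscript n: squared distances from the sample
    mean, divided by n. This tends to underestimate the
    population variance."""
    x_bar = sum(sample) / len(sample)
    return sum((x - x_bar) ** 2 for x in sample) / len(sample)

sample = [4, 5, 9]  # made-up sample; x-bar = 6.0
print(biased_sample_variance(sample))  # 4.666...
```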
But we already talked about this in the last video: how would we find the best unbiased estimate of the population variance? This is usually what we're trying to get at. Well, in the last video we talked about what to do if we want an unbiased estimate --and here, in this video, I want to give you a sense of the intuition why. We take the sum: we go through every data point in our sample, take that data point, subtract the sample mean from it, and square that. But instead of dividing by n, we divide by n minus 1. We're dividing by a smaller number, and when you divide by a smaller number, you get a larger value. So this one is going to be larger, and this one is going to be smaller. This one we refer to as the unbiased estimate, and this one we refer to as the biased estimate. If people just write this, they're talking about the sample variance, and it's a good idea to clarify which one they're talking about. But if you had to guess, and people give you no further information, they're probably talking about the unbiased estimate of the variance. So you'd probably divide by n minus 1.
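The unbiased version changes exactly one thing, the denominator:

```python
def unbiased_sample_variance(sample):
    """Squared distances from the sample mean, divided by
    n - 1 instead of n, giving a slightly larger value."""
    n = len(sample)
    x_bar = sum(sample) / n
    return sum((x - x_bar) ** 2 for x in sample) / (n - 1)

sample = [4, 5, 9]  # made-up sample from the earlier sketches
print(unbiased_sample_variance(sample))  # 7.0 (vs. 4.666... when dividing by n)
```

Python's standard library draws the same distinction: `statistics.pvariance` divides by n, while `statistics.variance` divides by n - 1.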
But let's think about why this estimate would be biased, and why we might want an estimate like that to be larger. And then maybe in the future we can have a computer program or something that really makes us feel better that dividing by n minus 1 gives us a better estimate of the true population variance. So let's imagine all the data in a population, and I'm just going to plot them on a number line. So this is my number line, and let me plot all the data points in my population: some data here, some data here, some data here, and some data over here. I can do as many points as I want; these are just points on the number line. Now, this is my entire population. So let's see how many points I have: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14. So in this case, what would be my big N? My big N would be 14. Now, let's say I take a sample,
a lowercase n of-- let's say my sample size is 3. Well, before I even think about that, let's think about roughly where the mean of this population would sit. The way I drew it --and I'm not going to calculate it exactly-- it looks like the mean might sit someplace roughly right over here. So the mean, the true population mean, the parameter, is going to sit right over here. Now, let's think about what happens when we sample. And I'm going to use a very small sample size, just to give us the intuition, but this is true of any sample size. So let's say we have a sample size of 3. There is some possibility, when we take our sample of size 3, that we happen to sample it in a way that our sample mean is pretty close to our population mean. For example, if we sampled that point, that point, and that point, I could imagine our sample mean might actually sit pretty close to our population mean. But there's a
distinct possibility that maybe, when I take my sample, I sample that point and that point. And the key idea here is that when you take a sample, your sample mean is always going to sit within your sample. And so there is a possibility that, when you take your sample, the true population mean could even be outside of the sample. And in this situation --and this is just to give you an intuition-- your sample mean is going to be sitting someplace in there. And so if you were to calculate the distance from each of these points to the sample mean --so this distance and that distance-- square them, and divide by the number of data points you have, this is going to be a much lower estimate than the true variance from the actual population mean, where these things are much, much, much further away. Now, the true population mean isn't always going to be outside of your sample, but it's possible that it is. So in general, when you just take your points and find the squared distances to your sample mean --which is always going to sit inside of your data, even though the true population mean could be outside of it, or at one end of your data, however you might want to think about it-- you are likely to be underestimating the true population variance.
So this right over here is an underestimate. And it does turn out that if, instead of dividing by n, you divide by n minus 1, you'll get a slightly larger sample variance, and this is an unbiased estimate. In the next video --and I might not get to it immediately-- I would like to generate some type of computer program that is more convincing that this is a better estimate of the population variance than this is.
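The kind of program described here is easy to sketch: draw many small samples from a known population, average both estimators across the samples, and compare the averages with the true variance. This is a minimal sketch, with a made-up Gaussian population and samples drawn with replacement:

```python
import random

def variance(data, ddof=0):
    """Mean squared distance from the mean, dividing by n - ddof."""
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / (len(data) - ddof)

random.seed(0)
population = [random.gauss(0, 10) for _ in range(1000)]
true_var = variance(population)  # sigma squared, dividing by big N

n, trials = 3, 50_000
biased = unbiased = 0.0
for _ in range(trials):
    s = [random.choice(population) for _ in range(n)]  # sample with replacement
    biased += variance(s)             # divide by n
    unbiased += variance(s, ddof=1)   # divide by n - 1

print(true_var, biased / trials, unbiased / trials)
# On average, dividing by n lands well below true_var, while
# dividing by n - 1 lands close to it.
```

The `ddof` (delta degrees of freedom) convention here mirrors NumPy's: `np.var(x)` divides by n, and `np.var(x, ddof=1)` divides by n - 1.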