
## AP®︎/College Statistics

### Unit 9: Lesson 6

Sampling distributions for sample means

# Standard error of the mean

AP.STATS: UNC‑3 (EU), UNC‑3.Q (LO), UNC‑3.Q.1 (EK)
Take a sample from a population, calculate the mean of that sample, put everything back, and do it over and over. How much do those sample means tend to vary from the "average" sample mean? This is what the standard error of the mean measures. Its longer name is the standard deviation of the sampling distribution of the sample mean. Created by Sal Khan.

## Want to join the conversation?

• Why is it called standard error of the mean even though it is the standard deviation of the sampling distribution of the sample mean?
• It is called an error because the standard deviation of the sampling distribution tells us how different a sample mean can be expected to be from the true mean. In other words, if we assume that the mean of our sample is always the true mean (even though it probably isn't) the standard deviation can tell us how likely we are to be wrong.
• Isn't the CLT just an extension of the law of large numbers?
• This is correct but I think students can potentially misunderstand this concept.

A good way to think of the data sets that one uses for Standard Error and CLT is that the data sets we are using contain mean values of a population.
• Can you help me understand when we use (sigma)/(√n) rather than just using sigma? For example, when we are calculating z-scores, I am so confused as to when we say (x-bar - mu) / ((sigma)/(√n)) rather than (x-bar - mu) / (sigma). Can anyone help?
• The proof is based on this basic property of random variables:

For independent random variables X and Y, and a constant m,
(1) if Z = mX + mY = m(X + Y),
(2) then Var[Z] = m^2*Var[X] + m^2*Var[Y] = m^2(Var[X] + Var[Y]).

Similarly, by taking a sample of size n and calculating its mean, we are creating a new random variable x-bar that is a scaled sum of many independent random variables. Think of x-bar as Z, 1/n as m, and each sample point x(i) as X, Y, etc.:

(1) x-bar = (x(1) + x(2) + ... + x(n))/n = 1/n * (x(1) + x(2) + ... + x(n))

What is the variance of this new random variable x-bar? Apply the property from above:

(2) Var[x-bar] = 1/n^2 * (Var[x(1)] + Var[x(2)] + ... + Var[x(n)])

When we take a sample of n data points, each individual point x(i) 'inherits' the population's variance: Var[x(i)] = σ^2. This means we can simplify:

(2) Var[x-bar] = 1/n^2 * (σ^2 + σ^2 + ... + σ^2) = n/n^2 * σ^2 = σ^2/n

This is the variance of the sample mean. The sampling distribution of the sample mean is exactly the distribution of this new random variable x-bar: every repeated sample mean x-bar(1), x-bar(2), ... is an independent draw from it, and each draw has variance σ^2/n.

Finally, the sampling distribution's standard error is the square root of the sampling distribution's variance: σ/√n.
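A quick simulation can confirm this result. The sketch below (Python; the population mean, standard deviation, sample size, and trial count are all arbitrary choices) draws many samples of size n, computes each sample mean, and compares the empirical standard deviation of those means to σ/√n:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n, trials = 50.0, 10.0, 25, 100_000

# Draw `trials` samples of size n and take the mean of each one.
sample_means = rng.normal(loc=mu, scale=sigma, size=(trials, n)).mean(axis=1)

print(sample_means.std())   # empirical standard error of the mean
print(sigma / np.sqrt(n))   # theoretical value: sigma/sqrt(n) = 2.0
```

The two printed numbers agree closely, and the agreement tightens as the number of trials grows.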
• Why is the standard error of the mean so much more sensitive to the number of things I am averaging over than to the number of times we take the average? That second number doesn't even appear in the formula, even though it did vary a bit in the experimental program (although seemingly not in a monotonic or convergent way!)
• The process of averaging a certain number of samples from one distribution actually creates a NEW distribution. It's that new distribution that you're sampling from. So when Sal averages 16 samples, he's creating one new distribution. When he averages 25 samples, he's creating a different new distribution.

But from that point forward, taking a lot of averages is just sampling lots of times from that new distribution. So taking 10,000 samples from the average of 16 just gives you an idea of what that new distribution looks like.

If you had the mathematical equation for the original distribution, you can derive the equation for the new distribution. That equation will have a fixed mean and standard deviation, and taking more and more samples simply gives you more confidence in your estimate of the mean and standard deviation.

As far as the variation in the experimental program, think of it this way. It's "possible" to take 100 samples that give you EXACTLY the mean and standard deviation of the new distribution. You just can't know for certain that you actually got the right answer. If you take a billion samples, you have a lot more confidence that you're really close, even if it isn't exact. Over time, it will converge, but that convergence isn't strict. It can fluctuate some while converging.
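This distinction can be made concrete with a short simulation (a Python sketch with made-up numbers): the spread of the new distribution is fixed by the sample size n, and taking more trials only reduces the noise in our estimate of that spread:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n = 10.0, 16
# The "new distribution" has a fixed spread of sigma/sqrt(n) = 2.5,
# no matter how many times we sample from it.

for trials in (100, 1_000_000):
    means = rng.normal(scale=sigma, size=(trials, n)).mean(axis=1)
    print(trials, means.std())  # both hover around 2.5; more trials = less noise
```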
• I read in a book that a normal distribution has a kurtosis of 3. How come the Java applet shows a kurtosis close to zero if the data are approximately normally distributed?
• I think the Java applet reports excess kurtosis, which is 0 for the normal curve. In other words, it subtracts 3 from the raw kurtosis.
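The two conventions are easy to check directly. A minimal numpy sketch (the sample size is arbitrary) computes both from the same data:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=100_000)

centered = data - data.mean()
m2 = np.mean(centered**2)  # second central moment (variance)
m4 = np.mean(centered**4)  # fourth central moment

pearson = m4 / m2**2       # "raw" kurtosis: about 3 for normal data
fisher = pearson - 3.0     # "excess" kurtosis: about 0, the applet's convention
print(pearson, fisher)
```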
• If the original data set only had 10,000 data points, and I selected a sample size of n=10000, calculated x_bar 100 times, and created a frequency distribution, wouldn't that just be a vertical bar? In that case the distribution doesn't look very normal at all.

It would have no tails and no peaks, so how can the distribution look increasingly normal as n->∞ ? Are we assuming an arbitrarily large original data set? Wouldn't it make more sense to say that the distribution looks increasingly normal with n as it initially increases, and then decreasingly normal with n as it approaches the size of the total original data set? Can we take partial derivatives and minimize for skew and kurtosis?

Also, if we can always get to an arbitrarily small variance by increasing n, aren't we losing the meaning of the data? Isn't it like blurring an image to the point where it's all just one color? At some point the image is no longer recognizable.

Do we just keep iterating through variances until we're happy? Is there a heuristic for preserving data integrity (in the non-image case where it's not as easy to identify whether something is representative of the original data)?
• If the population has N=10000, and the sample has n=10000, then there is no need to think about the sampling distribution. The sampling distribution is a way to describe how a statistic behaves from sample to sample, but if we sampled the whole population, then we can calculate the parameters directly.

More generally though, you seem to be getting at the idea of what happens as n->∞. Yes, it's true that the standard error of the mean gets smaller and smaller as n increases, but it never actually reaches the point of being a single vertical bar (we'd call that a degenerate distribution). That's the limit as n grows without bound: it may be what would "eventually happen", but we can never actually get there.

And also, yes, we often assume that the population size is arbitrarily large relative to the sample size (quite often we assume that the population is infinite in size). In cases where the sample is large relative to the population (such as when N=10000 and n=9000) there are corrections that can be made to account for this fact.
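One such correction is the finite population correction (FPC), which multiplies the standard error by sqrt((N - n) / (N - 1)) when sampling without replacement from a small population. A sketch, with made-up numbers matching the N=10000, n=9000 example above:

```python
import numpy as np

def se_with_fpc(sigma, n, N):
    """Standard error of the mean times the finite population correction,
    sqrt((N - n) / (N - 1)), for sampling without replacement."""
    return sigma / np.sqrt(n) * np.sqrt((N - n) / (N - 1))

sigma = 10.0
print(sigma / np.sqrt(9000))             # uncorrected SE, about 0.105
print(se_with_fpc(sigma, 9000, 10_000))  # corrected SE, about 0.033
```

Note that when n = N (the sample is the whole population), the corrected SE is exactly zero, matching the point above that there is then nothing left to estimate.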

> "Can we take partial derivatives and minimize for skew and kurtosis?"

I suppose it may be possible, but it wouldn't really be meaningful. Neither of those quantities is strictly positive, so minimizing with respect to them wouldn't drive us toward a normal distribution.

> "Also, if we can always get to an arbitrarily small variance by increasing n, aren't we losing the meaning of the data? Isn't it like blurring an image to the point where it's all just one color? At some point the image is no longer recognizable."

We can do this (within reason, sometimes it's just too expensive to collect a lot of observations). However, we aren't losing the meaning of the data. The sampling distribution isn't meant to reflect the original data in the least bit, it's meant to give us information on the population mean (because the sample mean will tend to be around the population mean). When the standard error gets very small, we can estimate the population mean with much more precision.
• In addition to varying the sample size (n), shouldn't variation in the number of trials (say, 10 x n versus 10,000 x n) impact the degree to which the sampling distribution fits the normal curve?
• Not quite. The central limit theorem is about the sample size: for large n, the sampling distribution of the sample mean (xbar) is approximately normal no matter how the population is distributed. Increasing the number of trials doesn't make the sampling distribution itself more normal; it only gives you a clearer picture (a better histogram estimate) of whatever that distribution already is.
• So, what are the assumptions for the CLT to be true? Of course, if the distribution is Cauchy, the CLT doesn't apply. Is a finite standard deviation all you need? Don't the samples have to be independent as well? I suppose that may be the most difficult condition to meet in the real world. Do these same, rather outlandish, assumptions apply to the law of large numbers?
• This can get a bit tricky. For the "typical" CLT, we assume that the samples are all independent draws from a population with a constant mean, and a constant, finite variance.

There are generalizations of the CLT which relax these assumptions. I think the least restrictive one says something like: all samples must be from populations with finite mean and variance. They don't necessarily need to have the same mean or variance, and don't necessarily need to be independent (though I believe those things affect the rate of convergence, so the "n>30" rule wouldn't work).

As to the law of large numbers, I believe that's more about estimating a parameter. I believe there is an assumption that the observations come from the same population (constant parameter values) and are independent. I don't think there is any need for the mean or variance to be finite, unless that's what the LLN is being applied to. It's just about convergence in probability, so as long as the parameter you're interested in is finite, other parameters shouldn't really matter.

This is all recollection off the top of my head, but I'm pretty confident.
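For the "typical" CLT, a simulation makes the convergence visible. The sketch below (Python; the exponential population is an arbitrary, heavily skewed choice) shows the skewness of the sampling distribution of the mean shrinking toward the normal distribution's 0 as n grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_skewness(x):
    c = x - x.mean()
    return np.mean(c**3) / np.mean(c**2) ** 1.5

# Exponential population: skewness 2, clearly non-normal.
results = {}
for n in (2, 30, 500):
    means = rng.exponential(size=(50_000, n)).mean(axis=1)
    results[n] = sample_skewness(means)
    print(n, results[n])  # shrinks roughly like 2/sqrt(n)
```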
• Given that the size of a sample is 30 ( n=30 ).

I know that the population mean ( "mu" ) is equal to the mean of the repeated sample means ( it means that we have collected so many samples and each sample has a sample size of 30).

For the population s.d. ( "sigma" ), it could be found by multiplying the standard deviation of the repeated sample means by the square root of the sample size ( n=30 ); we can therefore estimate the population mean using a confidence interval analysis.

My question is:
We often estimate the sigma ( the population s.d. ) by simply using the s ( the sample s.d.), which is the s.d. of just one sample ( with a size of 30), in the above formula.
However, this s is not the s.d. of the repeated sample mean.
What is the reasoning behind or is there something I got wrong?

Thank you so much : ]
• I think you've misunderstood something along the way. An interval estimate for the population mean, mu, is:
``xbar +/- T * s / sqrt(n)``

where s is the standard deviation of the original data; it is NOT the standard deviation of the repeated sample means (the standard error of the sample mean, or just the standard error, SE). The SE is the entire quantity s / sqrt(n). Of these, s is the estimate of the population standard deviation sigma; the SE is not an estimate of sigma (it's an estimate of sigma / sqrt(n)).
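Putting the pieces together, here is a sketch of that interval estimate in Python (the data are simulated, and 2.045 is the t* critical value for 95% confidence with df = 29):

```python
import numpy as np

rng = np.random.default_rng(7)
sample = rng.normal(loc=100.0, scale=15.0, size=30)  # one sample of n = 30

n = sample.size
xbar = sample.mean()
s = sample.std(ddof=1)   # s estimates sigma, the population s.d.
se = s / np.sqrt(n)      # SE estimates sigma/sqrt(n), NOT sigma

t_crit = 2.045           # t* for 95% confidence, df = n - 1 = 29
lo, hi = xbar - t_crit * se, xbar + t_crit * se
print(lo, hi)
```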