Reference: Conditions for inference on a mean
When we want to carry out inference (build a confidence interval or do a significance test) on a mean, the accuracy of our methods depends on a few conditions. Before doing the actual computations of the interval or test, it's important to check whether or not these conditions have been met. Otherwise the calculations and conclusions that follow may not be correct.
The conditions we need for inference on a mean are:
- Random: A random sample or randomized experiment should be used to obtain the data.
- Normal: The sampling distribution of x̄ (the sample mean) needs to be approximately normal. This is true if our parent population is normal or if our sample is reasonably large (n ≥ 30).
- Independent: Individual observations need to be independent. If sampling without replacement, our sample size shouldn't be more than 10% of the population.
Let's look at each of these conditions a little more in-depth.
The random condition
Random samples give us unbiased data from a population. When we don't use random selection, the resulting data usually has some form of bias, so using it to infer something about the population can be risky.
More specifically, sample means are unbiased estimators of their population mean. For example, suppose we have a bag of individually numbered ping pong balls whose population mean is μ. We could take random samples of balls from the bag and calculate the mean x̄ from each sample. Some samples would have a mean higher than μ and some would be lower. But on average, the mean of each sample will equal μ. We write this property as μ_x̄ = μ, which holds true as long as we are taking random samples.
This won't necessarily happen if we use a non-random sample. Biased samples can lead to inaccurate results, so they shouldn't be used to create confidence intervals or carry out significance tests.
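To see this unbiasedness in action, here is a minimal Python sketch (the numbered population, sample size, and repetition count are invented values for illustration, not from the article): it draws many random samples, averages each one, and checks that the average of the sample means lands close to the population mean.

```python
import numpy as np

# Hypothetical population: ping pong balls numbered 1 through 60.
population = np.arange(1, 61)
mu = population.mean()  # population mean

rng = np.random.default_rng(0)
reps, n = 20_000, 5

# Take many random samples without replacement and record each sample mean.
sample_means = np.array([
    rng.choice(population, size=n, replace=False).mean()
    for _ in range(reps)
])

print(f"population mean:      {mu:.3f}")
print(f"mean of sample means: {sample_means.mean():.3f}")  # lands close to mu
```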
The normal condition
The sampling distribution of x̄ (a sample mean) is approximately normal in a few different cases. The shape of the sampling distribution of x̄ mostly depends on the shape of the parent population and the sample size n.
Case 1: Parent population is normally distributed
If the parent population is normally distributed, then the sampling distribution of x̄ is approximately normal regardless of sample size. So if we know that the parent population is normally distributed, we pass this condition even if the sample size is small. In practice, however, we usually don't know if the parent population is normally distributed.
Case 2: Not normal or unknown parent population; sample size is large (n ≥ 30)
The sampling distribution of x̄ is approximately normal as long as the sample size is reasonably large. Because of the central limit theorem, when n ≥ 30, we can treat the sampling distribution of x̄ as approximately normal regardless of the shape of the parent population.
There are a few rare cases where the parent population has such an unusual shape that the sampling distribution of the sample mean isn't quite normal for sample sizes near 30. These cases are rare, so in practice, we are usually safe to assume approximate normality in the sampling distribution when n ≥ 30.
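A short simulation can make the central limit theorem concrete. The sketch below uses a strongly right-skewed exponential parent population (an arbitrary choice for illustration) and measures the skewness of the simulated sampling distribution of x̄ at two sample sizes; the skewness shrinks toward 0, the value for a symmetric normal shape, as n grows toward 30.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

def sampling_dist_skewness(n, reps=50_000):
    """Simulate the sampling distribution of the mean for samples of size n
    drawn from a right-skewed exponential parent, and return its skewness."""
    means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    return skew(means)

# Theory says the skewness here is 2/sqrt(n), so it shrinks as n grows.
print(f"n = 2:  skewness ≈ {sampling_dist_skewness(2):.2f}")   # noticeably skewed
print(f"n = 30: skewness ≈ {sampling_dist_skewness(30):.2f}")  # much closer to 0
```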
Case 3: Not normal or unknown parent population; sample size is small (n < 30)
As long as the parent population doesn't have outliers or strong skew, even smaller samples will produce a sampling distribution of x̄ that is approximately normal. In practice, we usually can't see the shape of the parent population, but we can try to infer its shape from the distribution of data in the sample. If the data in the sample show skew or outliers, we should doubt that the parent is approximately normal, and so the sampling distribution of x̄ may not be normal either. But if the sample data are roughly symmetric and don't show outliers or strong skew, we can assume that the sampling distribution of x̄ will be approximately normal.
The big idea is that we need to graph our sample data when n < 30 and then make a decision about the normal condition based on the appearance of the sample data.
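There is no single rule for "graph and decide", but a rough numeric screen along the same lines is easy to sketch. The function below is a hypothetical heuristic of my own (the 1.5 × IQR outlier fence is a standard convention; the skew cutoff and the sample data are invented), not a method from the article:

```python
import numpy as np

def looks_roughly_normal(sample):
    """Crude screen for a small sample: no 1.5*IQR outliers, and the
    mean and median reasonably close relative to the sample's spread."""
    x = np.asarray(sample, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    has_outliers = np.any((x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr))
    # Invented cutoff: flag skew if mean and median differ by > 0.5 sd.
    skewed = abs(x.mean() - np.median(x)) > 0.5 * x.std(ddof=1)
    return not has_outliers and not skewed

sample = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6, 5.2]  # invented data
print(looks_roughly_normal(sample))  # True: roughly symmetric, no outliers
```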
The independence condition
To use the formula for the standard deviation of x̄, we need individual observations to be independent. In an experiment, good design usually takes care of independence between subjects (control, different treatments, randomization).
In an observational study that involves sampling without replacement, individual observations aren't technically independent, since removing each observation changes the population. However, the condition says that if we sample 10% or less of the population, we can treat individual observations as independent, since removing each observation doesn't change the population all that much as we sample. For instance, if our sample size is n = 30, there should be at least N = 300 members in the population for the sample to meet the independence condition.
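The 10% check itself is simple arithmetic; this tiny sketch just encodes the rule stated above:

```python
def meets_ten_percent_condition(n, population_size):
    """Sampling without replacement: treat observations as independent
    only if the sample is at most 10% of the population."""
    return n <= 0.10 * population_size

print(meets_ten_percent_condition(30, 300))  # True  (exactly 10%)
print(meets_ten_percent_condition(30, 250))  # False (more than 10%)
```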
Assuming independence between observations allows us to use this formula for the standard deviation of x̄ when we're making confidence intervals or doing significance tests:

σ_x̄ = σ/√n
We usually don't know the population standard deviation σ, so we substitute the sample standard deviation s_x as an estimate for σ. When we do this, we call it the standard error of x̄ to distinguish it from the standard deviation.
So our formula for the standard error of x̄ is:

σ_x̄ ≈ s_x/√n
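As a quick numerical illustration (the sample values below are invented), the standard error is just the sample standard deviation divided by the square root of the sample size:

```python
import numpy as np

sample = np.array([12.1, 11.4, 13.0, 12.6, 11.9, 12.8])  # invented data
n = len(sample)
s = sample.std(ddof=1)           # sample standard deviation s_x
standard_error = s / np.sqrt(n)  # estimates sigma_xbar = sigma / sqrt(n)
print(f"s = {s:.3f}, SE = {standard_error:.3f}")
```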
Summary
If all three of these conditions are met, then we can feel good about using t distributions to make a confidence interval or do a significance test. Satisfying these conditions makes our calculations accurate and our conclusions reliable.
The random condition is perhaps the most important. If we break the random condition, there is probably bias in the data. The only reliable way to correct for a biased sample is to recollect the data in an unbiased way.
The other two conditions are important, but if we don't meet the normal or independence conditions, we may not need to start over. For example, there is a way to correct for the lack of independence when we sample more than 10% of a population, but it's beyond the scope of what we're learning right now.
The main idea is that it's important to verify certain conditions are met before we make these confidence intervals or do these significance tests.
Want to join the conversation?
- What happened to the normal condition np ≥ 10 and n(1 − p) ≥ 10? (16 votes)
- That condition is for the sampling distribution of a sample proportion, not a sample mean. (42 votes)
- Shouldn't the standard error of the sample just be the sample standard deviation, instead of (sample standard deviation)/sqrt(n)? (8 votes)
- When you take a sample of a population, the standard deviation of the sample mean is sd/sqrt(n). What stays the same is the mean: the mean is the same for both the population and the sampling distribution. I think you're confusing the two. (3 votes)
- It's said that n should be ≥ 30 for calculating t-intervals (using t-statistics). But in another video (https://www.khanacademy.org/math/statistics-probability/significance-tests-one-sample/more-significance-testing-videos/v/z-statistics-vs-t-statistics) Sal says that t-statistics should be used instead of z-statistics only if n < 30. (4 votes)
- z-statistics will give a narrower confidence interval than t-statistics, but the larger 𝑛 is, the smaller that difference will be, and for 𝑛 ≥ 30 the difference can be considered negligible. (6 votes)
- I'm very curious about the method for correcting the independence factor of samples when n > 10% of N. It was mentioned as "beyond the scope"; does anyone have references for that? Maybe Stat Trek? (3 votes)
- If the population is known to likely not be normal and the sample has n < 30, does the sample have to be transformed to normal to make an inference on a CI? E.g., if you want to determine a CI on the number of customers that arrive at a drive-through window during lunch: I believe this would be a Poisson counting process, therefore not normal. So wouldn't this be skewed to the right, since the number of customers is bounded at ≥ 0 and likely not symmetric? Basically, how can you correct a sample to make an inference on a CI for a mean if dealing with Case #3? Thanks! (1 vote)