
# Conditions for valid t intervals

Examples showing how to determine whether the conditions have been met for constructing a t interval to estimate a mean.
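As a concrete illustration (my own sketch, not from the video), here is a minimal t interval computed from a single sample once the conditions are judged to be met. The data are made up, and t* = 2.042 is the standard table value for 95% confidence with 30 degrees of freedom:

```python
import math
import random
import statistics

random.seed(1)
# Made-up data standing in for a random sample of 31 measurements.
sample = [round(random.gauss(50, 8), 1) for _ in range(31)]

n = len(sample)
assert n >= 30  # normality condition met via the large-sample rule

xbar = statistics.fmean(sample)
s = statistics.stdev(sample)   # sample SD, since the population SD is unknown
t_star = 2.042                 # t critical value: 95% confidence, df = 30
margin = t_star * s / math.sqrt(n)

print(f"mean = {xbar:.1f}, 95% CI = ({xbar - margin:.1f}, {xbar + margin:.1f})")
```

The random and independence conditions can't be checked in code; they depend on how the data were collected.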

## Want to join the conversation?

• Why can't we use the "number of successes and number of failures both ≥ 10" test to check for normality?
• Previously, in the videos on the central limit theorem, it was said that as your sample size approaches infinity, the sampling distribution of the sample mean approaches normal — and that you don't need anything near infinity, since sizes like 10 or 20 already get close. So where does this n ≥ 30 rule come from?
• The 30 refers to the size of each sample, not to taking 30 samples. If you want the distribution of the sample mean to be approximately normal with only a mildly non-normal population, a sample size of 10 or 20 may do; otherwise each sample should have size ≥ 30. If your sample size is < 30 and the population distribution is not normal (in this video the population is right-skewed), the sampling distribution of the sample mean won't be approximately normal, even if the number of samples you take approaches infinity.
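A quick simulation (my own illustration, not from the video) makes this visible: draw many samples of size 30 from a right-skewed population — Exponential(1), whose skewness is 2 — and the distribution of the sample means comes out far closer to symmetric:

```python
import random
import statistics

random.seed(0)

def mean_of_sample(n):
    # Exponential(1) is right-skewed: population skewness = 2.
    return statistics.fmean(random.expovariate(1) for _ in range(n))

def skewness(xs):
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean(((x - m) / s) ** 3 for x in xs)

# 10_000 sample means, each from a sample of size n = 30.
means = [mean_of_sample(30) for _ in range(10_000)]

# Skewness of the sample means: much smaller than the population's 2.
print(round(skewness(means), 2))
```

With n = 5 instead of 30, the printed skewness stays noticeably larger, which is the point of the rule of thumb.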
• Regarding the second assumption (normality): isn't it the law of large numbers rather than the central limit theorem that ensures it? After all, we only have one random sample.
• In the video, Sal said that we use t* when we don't have access to the standard deviation of the sampling distribution, and so we use the sample standard deviation instead.
Didn't he mean to say when we don't have access to the true population standard deviation? The sampling distribution is part of building our estimate, and we use known parameters (like the population SD) if we have them — but usually we don't, so we use the sample SD.
Is that right? We use the sample SD in place of the missing population parameter, not in place of the missing SD of the sampling distribution. • I am having some trouble understanding your question exactly. You seem to be asking why we don't know the population parameters (mean and standard deviation). I will try to explain what I think.
In general we are using a sample to estimate the parameters of a population because it is impractical to know something about every item in a true (often large) population. For example, it is too expensive for a child seat company to call all parents about their opinions on a new carseat design. So we take a sample, a subset of the total population.
The standard deviation of the sampling distribution is the standard error, which is approximated by the standard deviation of our sample divided by the square root of our sample size (it can't be computed from the population standard deviation, because we don't know that parameter). We never know the 'true' standard deviation of a sampling distribution.
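To make that formula concrete, here is a small sketch (with made-up data) of estimating the standard error — the standard deviation of the sampling distribution of the mean — from a single sample:

```python
import math
import statistics

# Hypothetical measurements from one sample; values are made up.
sample = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7]
n = len(sample)

s = statistics.stdev(sample)        # sample SD (n - 1 denominator)
standard_error = s / math.sqrt(n)   # estimated SD of the sampling distribution

print(round(standard_error, 3))
```

Note that the standard error shrinks as n grows, even though the sample SD itself does not systematically shrink.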
• I asked the same question on a later video about "z-statistics vs. t-statistics".

In this video, one of the conditions for a t interval is met when the sample size, n, is greater than or equal to 30.

However, the later video (the one I mentioned in my first sentence) says that we should use a t statistic when our sample size is less than 30.

Could you please clarify this for me?

Thanks!
• Couldn't you just say that the sampling distribution is approximately normal if:

(1) the sample size is ≥ 30, or
(2) the population distribution is roughly symmetric?

The population distribution being normal would itself satisfy the second condition, so it seems like it doesn't actually add anything.
• I read in other sources that we use t statistics when n ≤ 30. Why is Sal saying the opposite here?
• It is confusing to me that the word "sample" is used both for a single data point within a trial and for the complete trial taken as a whole. For example, if we took a bunch of 10 ml samples of pond water, each of these "samples" would have a specific measurement, say, the number of microorganisms per ml. We might also speak of a "sample" of 50 voters taken from a population of, say, 1000. This ambiguity of language leads to unnecessary confusion, IMHO.
• What is the difference between the sample standard deviation and the standard deviation of the sampling distribution?
• The sample standard deviation is the standard deviation from one sample (e.g. I sampled 100 voters in my home state of Queensland and asked if they support Australia becoming a republic). It is used to approximate the standard deviation (a measure of spread) of the true population — all Queenslanders eligible to vote. The standard deviation of the sampling distribution does not approximate the population standard deviation. It is a measure of the spread of a separate thing called the sampling distribution (e.g. I sample 100 voters as above and record the result as one data point, then take a fresh sample of 100 and record that, then another, and another, ad infinitum, until I have a distribution of values from many samples).