
# Sample size for a given margin of error for a mean

Calculate the approximate sample size required to obtain a desired margin of error in a confidence interval for a mean.

## Want to join the conversation?

• I tried the t* times SE/square-root(n) approach with all the answers, and only answers C (approximately 8.69) and D (approximately 6.82) produced a margin of error of no more than 10. Using sample sizes of 5 and 7 would result in margins of error of approximately 14.30 and 11.02, respectively. As Sal Khan said in some previous videos, if we don't know the standard deviation of the true population and instead use the standard deviation from a sample (aka Standard Error) to calculate the margin of error with the formula z* times sigma/square-root(n), the result will not reach the desired confidence level and will often underestimate it. Here he did the exact thing he said before that we shouldn't be doing... And the results I calculated also show that the answer should be C (the smallest sample size to get a margin of error <= 10) instead of B... I am lost now. Can anyone help?
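The margins of error quoted above can be checked with a short sketch. This assumes the pilot-study SD of 15 km and two-tailed 90% t critical values read from a t-table (2.132 for df = 4, 1.943 for df = 6); `margin_of_error` is a hypothetical helper, not something from the video:

```python
import math

# Two-tailed 90% t critical values read from a t-table (df -> t*).
t_star = {4: 2.132, 6: 1.943}

def margin_of_error(n, sd=15):
    """Margin of error t* * sd / sqrt(n), using df = n - 1."""
    return t_star[n - 1] * sd / math.sqrt(n)

print(round(margin_of_error(5), 2))  # approximately 14.3
print(round(margin_of_error(7), 2))  # approximately 11.02
```

Both values match the ones the commenter reports: a sample size of 5 or 7 gives a margin of error above the target of 10 once the t distribution is used.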
• A correction of terminology:

sample SD is just sample SD

SE of the mean is equal to (sample SD)/sqrt(n)

The thing is that here, we don't have a sample standard deviation. But we do have a pilot study that tells us that the population SD is 15km, and so hopefully, we can trust that. And if we use population SD, we use z*, not t*.

It is still possible that the n Sal calculated will not give a ME that is less than 10. Maybe the pilot study was completely wrong, or the sample SD that we get once we actually start the experiment ends up grossly overestimating the population SD.
• Why did Khan choose to use the z-score instead of the t-table for this problem, given that they want to take a sample?
• You need to know the degrees of freedom (df), which requires you to know the sample size, to use a t-table. Since the sample size is what we are trying to find out, we cannot use a t-table.
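As a rough sketch of the z-based approach the video takes (assuming sigma = 15 km from the pilot study, a target margin of error of 10 km, and a 90% confidence level), the required n can be computed with Python's standard library:

```python
import math
from statistics import NormalDist

sigma = 15      # population SD from the pilot study (km)
target_me = 10  # desired margin of error (km)

# 90% interval -> 5% in each tail -> cumulative probability 0.95
z_star = NormalDist().inv_cdf(0.95)

# Solve z* * sigma / sqrt(n) <= target_me for n and round up.
n_min = math.ceil((z_star * sigma / target_me) ** 2)
print(round(z_star, 3), n_min)  # 1.645 7
```

This reproduces the video's answer of n = 7; the rest of this thread is about whether that z-based n is actually large enough once t critical values are used.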
• I share the other commenters' sentiment. If we use the sample size n = 7 and apply the appropriate t critical value for df = 6, we'll see that the margin of error is about 11, which is 10% higher than the target of 10. It is clearly unwise to use a z score for such a small sample.

Wouldn't it be more prudent to calculate the ME for each of the presented values (only 4 of them, after all) and choose the one that actually satisfies the condition? At least as far as this example goes, the solution provided in the video is unsatisfactory.
• My TI-84 Plus calculator doesn't have the tail function, so I get a z-value of 1.28 as a result. Furthermore, I checked the z-table and 90% is 1.28. How come you got 1.64?
• The probability of being inside the interval is 90%, but the z-table gives the probability of being inside or below the interval,
which is 90% + 5% = 95% ⇒ 𝑧 ≈ 1.64
If we continue with t instead, the minimum n is 9.

More details:

A margin of error of at most 10 with SD = 15 means t* * 15/sqrt(n) <= 10, which rearranges to the inequality
sqrt(n) >= 1.5 t*
We need the minimum n that satisfies this, where t* is the critical value for df = n - 1. Checking the t-table:

For n = 7: sqrt(7) = 2.65, while 1.5 * 1.943 = 2.915, so the inequality is not satisfied.

For n = 8: sqrt(8) = 2.828, while 1.5 * 1.895 = 2.842, still not satisfied.

BUT at n = 9: sqrt(9) = 3, while 1.5 * 1.860 = 2.79, satisfying the inequality.

So the minimum n is 9.
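The check above can be sketched in code. This assumes the same t-table values (1.943, 1.895, 1.860 for df = 6, 7, 8) and tests the inequality sqrt(n) >= 1.5 t* directly:

```python
import math

# Two-tailed 90% t critical values read from a t-table (df -> t*).
t_star = {6: 1.943, 7: 1.895, 8: 1.860}

# ME <= 10 with SD = 15 means t* * 15 / sqrt(n) <= 10,
# i.e. sqrt(n) >= 1.5 * t*, where t* uses df = n - 1.
satisfied = {n: math.sqrt(n) >= 1.5 * t_star[n - 1] for n in (7, 8, 9)}
print(satisfied)  # n = 9 is the smallest sample size that passes
```

Only n = 9 passes, matching the conclusion above; note that the candidate n and the critical value have to move together, which is why a simple plug-and-check beats solving for n with a fixed t*.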
• Can't we get an even better estimate if we take our initial result of n = 7 and then calculate a t critical value based on that? We know that n = 7, so df = 6. Plugging into invT on the TI-84 calculator, we get a t critical value of approximately 1.943, and solving the inequality 1.943 * 15 / sqrt(n) <= 10 gives n >= 8.496, leading to answer choice C.
• 1.943 is the T-critical value for a cumulative probability of 95%, specifically for when n = 7. The shape of the t distribution changes according to the size of n, so 1.943 is NOT the T-critical value for a cumulative probability of 95% for any other value of 'n', say 10.

You said to solve 1.943 * 15 / sqrt(n) <= 10 for 'n' to get the best estimate. The problem is, 1.943 is the T-critical value for a cumulative probability of 95% for 'n = 7', and it is NOT the T-critical value for a cumulative probability of 95% for 'n = 8.496'; if you use the TI-84 tcdf function, you will see that the cumulative probability of 1.943 for 'n = 8.496' (with df of 7.496) is 0.955, i.e. 95.5%. So {Sample Mean +/- 1.943 * 15 / sqrt(8.496)} is a confidence interval of about 91% (2 * 0.955 - 1), slightly more than the 90% interval we actually want.
• What if we don't have the population variance and only have the sample variance, and we can't use the t-interval with the sample variance to find 'n'? What should we do?
• You can test each value of n from the available options. This would be a lot faster than the alternative, which is expanding t* into its algebraic form and then solving the inequality.