### Course: AP®︎/College Statistics > Unit 10

Lesson 2: Confidence intervals for proportions
© 2024 Khan Academy

# Reference: Conditions for inference on a proportion

When we want to carry out inference on one proportion (build a confidence interval or do a significance test), the accuracy of our methods depends on a few conditions. Before doing the actual computations of the interval or test, it's important to check whether these conditions have been met; otherwise, the calculations and conclusions that follow aren't actually valid.

The conditions we need for inference on one proportion are:

- **Random**: The data needs to come from a random sample or randomized experiment.
- **Normal**: The sampling distribution of $\hat{p}$ needs to be approximately normal — we need at least $10$ expected successes and $10$ expected failures.
- **Independent**: Individual observations need to be independent. If sampling without replacement, our sample size shouldn't be more than $10\mathrm{\%}$ of the population.

Let's look at each of these conditions a little more in-depth.

## The random condition

Random samples give us unbiased data from a population. When samples aren't randomly selected, the data usually has some form of bias, so using data that wasn't randomly selected to make inferences about its population can be risky.

More specifically, sample proportions are unbiased estimators of their population proportion. For example, if we have a bag of candy where $50\mathrm{\%}$ of the candies are orange and we take random samples from the bag, some will have more than $50\mathrm{\%}$ orange and some will have less. But on average, the proportion of orange candies in each sample will equal $50\mathrm{\%}$ . We write this property as ${\mu}_{\hat{p}}=p$ , which holds true as long as our sample is random.
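A quick simulation can make this unbiasedness concrete. The numbers below (bag proportion, sample size, number of samples) are illustrative choices, not from the article:

```python
import random

random.seed(42)

p = 0.5            # true proportion of orange candies in the bag
n = 25             # candies drawn per sample
num_samples = 20_000

# Draw many random samples and record each sample proportion p-hat.
p_hats = []
for _ in range(num_samples):
    successes = sum(1 for _ in range(n) if random.random() < p)
    p_hats.append(successes / n)

# The average of the sample proportions lands very close to p,
# illustrating mu_p_hat = p for random samples.
mean_p_hat = sum(p_hats) / num_samples
print(round(mean_p_hat, 3))
```

Individual samples vary above and below $50\mathrm{\%}$, but their average hugs the true proportion.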

This won't necessarily happen if our sample isn't randomly selected though. Biased samples lead to inaccurate results, so they shouldn't be used to create confidence intervals or carry out significance tests.

## The normal condition

The sampling distribution of $\hat{p}$ is approximately normal as long as the expected number of successes and failures are both at least $10$ . This happens when our sample size $n$ is reasonably large. The proof of this is beyond the scope of AP statistics, but our tutorial on sampling distributions can provide some intuition and verification that this condition indeed works.

So we need:

$np \geq 10$ and $n(1-p) \geq 10$

If we are building a confidence interval, we don't have a value of $p$ to plug in, so we instead count the observed number of successes and failures in the sample data to make sure they are both at least $10$ . If we are doing a significance test, we use our sample size $n$ and the hypothesized value of $p$ to calculate our expected numbers of successes and failures.
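The large counts check can be sketched as a small helper; the function name and the numbers below are illustrative, not part of the article:

```python
def meets_normal_condition(n, p):
    """Large counts condition: expected successes n*p and
    expected failures n*(1 - p) are both at least 10."""
    return n * p >= 10 and n * (1 - p) >= 10

# Significance test: plug in the hypothesized value of p.
print(meets_normal_condition(n=50, p=0.1))   # 5 expected successes -> False
print(meets_normal_condition(n=120, p=0.1))  # 12 successes, 108 failures -> True
```

For a confidence interval, where there is no hypothesized $p$, you would instead check that the observed counts of successes and failures in the sample are each at least $10$.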

## The independence condition

To use the formula for standard deviation of $\hat{p}$ , we need individual observations to be independent. When we are sampling without replacement, individual observations aren't technically independent since removing each item changes the population.

But the $10\mathrm{\%}$ condition says that if we sample $10\mathrm{\%}$ or less of the population, we can treat individual observations as independent since removing each observation doesn't significantly change the population as we sample. For instance, if our sample size is $n=150$ , there should be at least $N=1500$ members in the population.
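As a sketch (hypothetical helper name, illustrative numbers), the $10\mathrm{\%}$ check compares the sample size against a tenth of the population:

```python
def meets_ten_percent_condition(n, population_size):
    """Sampling without replacement: the sample should be
    at most 10% of the population."""
    return n <= 0.10 * population_size

print(meets_ten_percent_condition(150, 1500))  # exactly 10% -> True
print(meets_ten_percent_condition(150, 1000))  # 15% of the population -> False
```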

This allows us to use the formula for standard deviation of $\hat{p}$:

${\sigma}_{\hat{p}}=\sqrt{\dfrac{p(1-p)}{n}}$

In a significance test, we use the sample size $n$ and the hypothesized value of $p$ .

If we are building a confidence interval for $p$, we don't actually know what $p$ is, so we substitute $\hat{p}$ as an estimate for $p$. When we do this, we call it the **standard error** of $\hat{p}$ to distinguish it from the standard deviation.

So our formula for the standard error of $\hat{p}$ is

${\sigma}_{\hat{p}}\approx\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}$
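A minimal sketch of this standard error computation, with an illustrative sample ($\hat{p}=0.6$ from $n=100$ is made up for the example):

```python
import math

def standard_error(p_hat, n):
    """Standard error of the sample proportion:
    sqrt(p_hat * (1 - p_hat) / n)."""
    return math.sqrt(p_hat * (1 - p_hat) / n)

# Suppose 60 successes in a sample of 100, so p_hat = 0.6.
se = standard_error(p_hat=0.6, n=100)
print(round(se, 4))  # sqrt(0.6 * 0.4 / 100) ≈ 0.049
```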

## Want to join the conversation?

- Why don't we use the sample standard deviation for the standard error?

At the end, it says the formula for standard error ≈ sqrt(p-hat*(1-p-hat)/n). But since p-hat comes from a sample, why don't we use the sample standard deviation with the n-1 correction to estimate the true standard deviation of the sampling distribution? Shouldn't it be sqrt(p-hat*(1-p-hat)/(n-1))?(19 votes)

- The appearance of n in the expression for the standard deviation of p-hat is not due to sampling, but due to the number of trials n for the Binomial random variable X~B(n,p), where n is the number of trials and p is the probability of a success in any given trial.

Unfortunately, in this context, the letter p is used for both the probability and the proportion.

So, the random variable p-hat is actually a scaling, by 1/n, of the Binomial random variable X~B(n,p). That is, p-hat = B(n,p)/n. That's how we get the proportion of successes - divide the number of successes, X, by the number of trials, n.

So, by the properties of scaling a random variable by the factor 1/n, the expected value E(p-hat)=(1/n)E(X) and the variance V(p-hat)=(1/n^2)V(X).

Thus, the standard deviation for p-hat is given by the square root of (1/n^2)V(X)

Recall, the mean and variance for the binomial random variable are, np and np(1-p), respectively. Hence the variance for p-hat is...

V(p-hat) = np(1-p)/n^2,

so that, the standard deviation for p-hat is...

sqrt(np(1-p)/n^2) = sqrt(p(1-p)/n) as shown in the video.

Hope this helped,

with kind regards...(14 votes)
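The scaling argument in this answer can be checked numerically; the values of n and p below are arbitrary:

```python
import math

n, p = 50, 0.3

# Variance of X ~ B(n, p), then of p_hat = X / n (scaling by 1/n^2).
var_X = n * p * (1 - p)
var_p_hat = var_X / n**2

# The closed form sqrt(p(1 - p) / n) agrees with sqrt(V(X) / n^2).
sd_from_scaling = math.sqrt(var_p_hat)
sd_closed_form = math.sqrt(p * (1 - p) / n)
print(math.isclose(sd_from_scaling, sd_closed_form))  # True
```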

- I remember another condition where something (sample size maybe?) had to be equal to or greater than 30. What was that?(11 votes)
- It is the "large enough" condition: when we are calculating for means, we don't have a value of p, so we can't calculate np and nq. Instead, we check if n ≥ 30. If so, it meets the large enough condition in place of the success/failure condition.(13 votes)

- Here we've approximated the standard deviation of the sample proportion by taking the formula sigma_p_hat = sqrt(p(1-p)/n) and just replacing p by p_hat to get sqrt(p_hat(1-p_hat)/n).

But in one of the videos earlier, we instead used sigma_p_hat = sigma/sqrt(n) and replaced the population standard deviation sigma with the sample standard deviation s to get s/sqrt(n).

These two formulas give different results, because s/sqrt(n) = sqrt(p_hat(1-p_hat)/(n-1)) due to the Bessel correction factor.

Which of these two approximations is best? I'm guessing the second one?(7 votes)

- The choice between the two approximations depends on the context and the specific characteristics of the data. The formula sqrt(p(1-p)/n) is a theoretical approximation based on the assumption of a large sample size and is commonly used in theoretical statistics. On the other hand, s/sqrt(n) with the Bessel correction factor (n-1) is an empirical estimate based on the sample standard deviation (s) and is used when the sample size is small relative to the population size. In general, if the sample size is large and the population size is much larger than the sample size, the first approximation may be more appropriate. If the sample size is small relative to the population size, the second approximation with the Bessel correction factor may be more accurate.(1 vote)
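To see the difference between the two estimates numerically (n and p-hat below are made-up values):

```python
import math

n, p_hat = 40, 0.45

# Plug-in estimate: sqrt(p_hat * (1 - p_hat) / n).
se_plugin = math.sqrt(p_hat * (1 - p_hat) / n)

# Sample standard deviation s of the 0/1 data with Bessel's correction:
# for 0/1 data, s^2 = n / (n - 1) * p_hat * (1 - p_hat), then s / sqrt(n).
s = math.sqrt(n / (n - 1) * p_hat * (1 - p_hat))
se_bessel = s / math.sqrt(n)

# se_bessel equals sqrt(p_hat * (1 - p_hat) / (n - 1)), slightly larger.
print(round(se_plugin, 5), round(se_bessel, 5))  # 0.07866 0.07966
```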

- It talks about significance tests; those haven't been covered yet in this course, right?(5 votes)
- Can someone show me this proof for the normal condition or reference a link?

All I can find is information about the 10% rule.(4 votes)

- Would I be able to apply this to video game stat distributions?(3 votes)
- Yes, you can apply these concepts to analyze distributions in video games, particularly if you are interested in understanding player behavior or performance based on sampled data. However, you would need to ensure that the assumptions underlying the statistical methods are appropriate for the context of the video game data.(1 vote)

- What is the difference between the standard error of the mean(sigma^2/n) and the standard error of the sample proportion mentioned above? Thanks!(3 votes)
- Same difference. For Bernoulli distribution sigma^2 = p * (1 - p)(1 vote)

- Wouldn't "not being independent" also affect the sampling distribution of the sample mean?(1 vote)
- Yes it would! I'm fairly sure the CLT assumes that the instances in the samples that you're taking the mean of are independent.

Also, the formula for the SD of the sampling distribution of the sample mean would not work if our instances aren't independent.(3 votes)

- If our data does not meet the normality, randomness, and independence conditions for statistical inference, what is the consequence? Can you still technically make inferences if you do not meet one or more of these conditions?(1 vote)
- If the conditions aren't met (which would rarely ever happen in a class exercise), then you basically can't carry out the significance test, confidence interval, etc.(2 votes)

- The lesson says that the independence condition must be met for us to be able to use the standard deviation formula for the sample proportion. Does this mean that the other two conditions (the normal and random conditions) do not necessarily need to be met?(1 vote)
- While meeting the independence condition is necessary for using the standard deviation formula for the sample proportion, the other conditions (random and normal) are also important for the validity of the inference. The random condition ensures that the sample is representative of the population, while the normal condition ensures that the sampling distribution is approximately normal, which is necessary for constructing accurate confidence intervals or performing significance tests. Therefore, all three conditions should ideally be met for valid inference.(2 votes)