Course: AP®︎/College Statistics > Unit 10
Lesson 9: Testing for the difference of two population proportions > Hypothesis test for difference in proportions
- Constructing hypotheses for two proportions
- Writing hypotheses for testing the difference of proportions
- Hypothesis test for difference in proportions example
- Test statistic in a two-sample z test for the difference of proportions
- P-value in a two-sample z test for the difference of proportions
- Comparing P-value to significance level for test involving difference of proportions
- Confidence interval for hypothesis test for difference in proportions
- Making conclusions about the difference of proportions
© 2024 Khan Academy
Hypothesis test for difference in proportions
Want to join the conversation?
- I didn't understand why we used the combined sample proportion here. And what if the combined sample proportion weren't given? Couldn't we just use the two proportions separately in the standard error formula?
- Yeah, I'm not exactly sure what he did there, but yes, you can use the standard error formula with the two proportions separately. In this example, both methods give nearly the same answer, though the combined (pooled) standard error is the one that matches the null hypothesis's assumption that the two proportions are equal.
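The two standard errors being discussed can be compared directly. A minimal sketch using the numbers from the video (p̂A = 0.58, p̂B = 0.52, n = 100 per district); the variable names are my own:

```python
import math

# Sample results from the video: 100 voters sampled in each district.
n_a, n_b = 100, 100
p_a, p_b = 0.58, 0.52                           # sample proportions
p_pool = (n_a * p_a + n_b * p_b) / (n_a + n_b)  # combined proportion, 0.55

# Pooled SE (what the video uses, appropriate under H0: pA = pB):
se_pooled = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

# Unpooled SE (each sample contributes its own estimated variance):
se_unpooled = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)

print(round(se_pooled, 4))    # ≈ 0.0704
print(round(se_unpooled, 4))  # ≈ 0.0702
```

Here the two come out almost identical, which is why either approach leads to the same conclusion for this poll; they differ more when the two sample proportions are far apart.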
- I'm slightly confused about why our z-score calculation uses the difference between p̂A and p̂B, when the standard deviation is calculated using p̂C (the combined p̂ of the A and B samples).
In my mind, we're asking, "Is it reasonable that A and B come from the same population?" If they do, the difference between p̂A and the estimated overall proportion p̂C will likely not be much more than a standard deviation, and that's why we calculate the standard deviation from the combined sample.
So why are we then measuring the difference between p̂A and p̂B, and not the difference between p̂A and p̂C? I understand that my thinking is wrong, I'm just not sure why!
- The problem statement does not say that the sampling was random. Is that another reason to conclude that sufficient evidence does not exist here?
- I have a question. To use the formula σ(A−B) = √(σ(A)² + σ(B)²), it is assumed that there is no dependency or relationship between A and B, beyond the 10% rule and independence within each sample.
So what is the solution if there is some relationship between A and B?
- I think there might be a problem here... In the video, Sal writes (0.55)(0.45)/100 twice instead of using each sample's own variance... Is this right or wrong?
- It is right: under H₀, p₁ = p₂ ≈ 0.55, which yields the same variance for both samples, but only because n₁ = n₂ = 100.
- Why is z = (p̂₁ − p̂₂) / SE?
- Because it is a test statistic. Remember, the z for any test statistic is
(estimator − null value) / SE
Let's focus on the numerator (estimator − null value):
∙ The "estimator" in this case is the difference between the sample proportions, since that is what we are trying to estimate from the question. Thus,
estimator = p̂₁ − p̂₂
∙ The "null value" in this case is zero, because we are assuming both proportions are equal, meaning there is no difference. Thus,
null value = 0
Putting them together, the numerator becomes:
(p̂₁ − p̂₂) − 0
Since subtracting 0 changes nothing, we can always leave it out, which is what Sal did. So the numerator is just:
p̂₁ − p̂₂
As for the denominator, the standard error (SE) plays the role of the standard deviation, which is always the denominator of z. So we get the formula:
z = (p̂₁ − p̂₂) / SE
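The formula above can be written out with the numbers from the video; a minimal sketch (variable names are my own):

```python
import math

# Values from the video: p-hat_A = 0.58, p-hat_B = 0.52,
# pooled p-hat = 0.55, and n = 100 voters per district.
p_hat_a, p_hat_b, p_pool, n = 0.58, 0.52, 0.55, 100

# Under H0 the SE uses the pooled proportion for both samples:
se = math.sqrt(p_pool * (1 - p_pool) / n + p_pool * (1 - p_pool) / n)

# z = (estimator - null value) / SE, and the null value is 0:
z = ((p_hat_a - p_hat_b) - 0) / se
print(round(z, 2))  # ≈ 0.85 (Sal gets ≈ 0.86 after rounding the SE to 0.07)
```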
- Hey, to find out whether a difference-in-proportions test can detect anything, don't we first need to calculate the sample size required at some alpha and power?
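For a planned study, a required sample size can indeed be worked out from alpha and power. A rough sketch, assuming we wanted to reliably detect a difference as small as the observed 0.58 vs. 0.52, using the standard unpooled two-proportion formula and the usual textbook critical values:

```python
import math

# Sample-size sketch: detect p1 = 0.58 vs p2 = 0.52 at alpha = 0.05
# (two-sided) with 80% power. Critical values are the standard
# textbook constants for these alpha and power levels.
p1, p2 = 0.58, 0.52
z_alpha = 1.96    # z for alpha/2 = 0.025
z_beta = 0.8416   # z for power = 0.80

n_per_group = ((z_alpha + z_beta) ** 2
               * (p1 * (1 - p1) + p2 * (1 - p2))
               / (p1 - p2) ** 2)
print(math.ceil(n_per_group))  # ≈ 1076 per district
```

By this rough calculation, the 100 voters per district sampled in the video are far too few to reliably detect a 6-percentage-point difference, which is consistent with the video's "fail to reject" conclusion.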
- I would argue that the variance of the difference of two samples of size 100 each should be
2 · p(1−p)/200 instead of
2 · p(1−p)/100, which unnecessarily increases the variance.
Am I wrong?
- I think I know now: both samples are actually drawn with a small sample size of 100 each, so each one contributes p(1−p)/100 of variance, even when the common p from H₀ is used.
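The self-answer above can be checked with a quick simulation. A minimal sketch (my own setup, not from the video): draw two independent samples of 100 from the same population with p = 0.55 and estimate the variance of the difference of sample proportions.

```python
import random

# Check: Var(p_hat_A - p_hat_B) = 2 * p(1-p)/100, not 2 * p(1-p)/200,
# when each sample has size 100.
random.seed(0)
p, n, trials = 0.55, 100, 10_000

diffs = []
for _ in range(trials):
    p_hat_a = sum(random.random() < p for _ in range(n)) / n
    p_hat_b = sum(random.random() < p for _ in range(n)) / n
    diffs.append(p_hat_a - p_hat_b)

mean = sum(diffs) / trials
var = sum((d - mean) ** 2 for d in diffs) / trials

print(var)                  # ≈ 0.005, matching the theoretical value below
print(2 * p * (1 - p) / n)  # = 0.00495, i.e. each sample contributes p(1-p)/n
```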
- Why is the file drawer effect problematic?
- The file drawer effect is problematic because it means researchers tend to publish only studies with significant results, which can lead to inaccurate and possibly biased interpretations of the data, the results, and the topic being researched.
Video transcript
- [Instructor] We're now going to explore hypothesis testing where we're thinking about the difference between proportions of two different populations. Here are the results from a recent poll that involved sampling voters from each of two neighboring districts, District A and District B. Folks were asked whether they supported a new law or not, and from each district we took a sample of 100 voters. We were then able to calculate the proportion from each sample that supported the law, and here we have the combined data, including the combined proportion. And we're asked, does this suggest a significant difference between the two districts?

This is asking for a hypothesis test, and the way we would do that is to set up our null hypothesis. Remember, our null hypothesis is the one under which we assume there is no difference. So we would assume that the true proportion of folks in District A who support the new law is equal to the proportion in District B who support it, or, another way to think about it, that the difference is equal to zero. And our alternative hypothesis is that the difference between the proportions is not equal to zero.

If we were doing an all-out hypothesis test, we would set a significance level, which we usually denote with an alpha. Oftentimes it might be a 10% or a 5% significance level; let's say we set it at a 5% significance level. And what we would do is say, "All right, let's assume that the null hypothesis is true, and assuming the null hypothesis is true, what is the probability of getting a difference between our sample proportions this extreme or more extreme? And if that probability is less than our significance level, then we reject the null hypothesis, which would suggest the alternative."

Now, before we go deeper into our inference, we want to check our conditions for inference, and we've seen these many times before. You have the random condition, where you need to feel good that both of these samples are truly random. You have your normal condition, which requires at least 10 successes and 10 failures in each sample, and we see that we do indeed have at least 10 successes and at least 10 failures in each of those samples. And then you have your independence condition: you're either sampling with replacement, or you need to feel good that each of these sample sizes is no more than 10% of the entire population. So I guess we will assume that there are at least 1,000 folks in District A and at least 1,000 folks in District B, and that will allow us to meet the independence condition.

With that out of the way, let's assume the null hypothesis, and let's start thinking about the sampling distribution of the difference between the sample proportions, assuming that null hypothesis. The first thing I want to think about is the standard deviation of that sampling distribution. Well, we have seen in a previous video, when we talked about differences of proportions, that we can think about the variance: the variance of the sampling distribution of the difference is equal to the variance of the sampling distribution of the sample proportion from District A plus the variance of the sampling distribution of the sample proportion from District B. Now, in general, you can figure out the variance of the sampling distribution of a sample proportion with a formula we've seen before: it is equal to the true proportion times one minus the true proportion, all of that over your sample size.

Now, in either case we don't know the true proportions for District A or District B; that's why we're even doing this hypothesis test to begin with. But we can try to estimate them. Remember, we're assuming that the true proportions are equal, even though we might not know what they are. And what is going to be our best estimate of that common true proportion, if we assume that District A and District B have no difference in the proportion of people who support the new law? Well, the best estimate would be the combined sample proportion right over here. So to estimate these values, we use the combined sample proportion in place of p: this is going to be our combined sample proportion times one minus our combined sample proportion, all of that over our sample size. And since we're assuming that there's no difference between District A and District B, the same applies to the other term.

So let me rewrite this: the standard deviation of the sampling distribution of the difference of the sample proportions from District A and District B is going to be roughly (remember, we weren't able to calculate it exactly; we're using this combined proportion as our best estimate) a big square root, and underneath it our estimate of the first term, which is 0.55 times one minus 0.55, so 0.45, over 100, plus our estimate of the second term, which is 0.55 times 0.45 again (it's the same thing again, because we're assuming the null hypothesis is true), all of that over 100. Now we can get our calculator out to actually compute it: the square root of 0.55 times 0.45 divided by 100, and rather than adding that whole thing again, I can just multiply by two. That is approximately 0.07. So this is approximately equal to 0.07.

Using this, we can calculate a z-score and then think about the probability of getting a z-score that extreme or more extreme. Our z-score, or z-value, is equal to the difference that we got, p-hat sub A minus p-hat sub B, all of that over our estimate of the standard deviation of the sampling distribution of the difference between the sample proportions, so all of that over 0.07. The numerator, 0.58 minus 0.52, is equal to 0.06, so we have 0.06 over 0.07. We can get our calculator out for this again: 0.06 divided by 0.07 is approximately 0.86, so this is approximately 0.86.

Now, what's the probability of getting something this extreme or more extreme? Let me just make sure we can visualize it properly. If this is our sampling distribution of the difference between our sample proportions, and we're assuming the null hypothesis, then the mean of our sampling distribution is going to be zero, right there, and we just got a result that is less than one standard deviation above the mean. So if this is one standard deviation and two standard deviations above the mean, and one and two standard deviations below the mean, our result puts us just under one standard deviation above. If we ask ourselves what's the probability of getting a result at least that extreme, it would be all of this area beyond our result, plus the corresponding area on the other side of the mean. And we know that this is over 30%, because even if you only count everything more extreme than one standard deviation above and below the mean, that is, this area and this area, you're looking at roughly 31 or 32%.

So the probability of getting something at least this extreme is going to be over 30%, definitely higher than our significance level. It's actually completely reasonable to get a difference this extreme if we assume the null hypothesis is true. In future videos, we can go even deeper, where we can actually look this up on a z-table to calculate these areas more precisely and compare them to the significance level, but here it's not even close. We're nowhere close to being able to reject the null hypothesis. So, to answer the question, does this suggest a significant difference between the two districts? No, no it doesn't.