
Conditions for valid confidence intervals for a proportion

There are three conditions we need to satisfy before we make a one-sample z-interval to estimate a population proportion. We need to satisfy the random, normal, and independence conditions for these confidence intervals to be valid.
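
As a quick, concrete companion to these conditions, here is a minimal Python sketch (my own illustration with made-up numbers and a hypothetical helper name, not part of the lesson) of how you might check the normal and independence conditions from a sample. The random condition is about how the data were collected, so it can't be verified from counts alone.

# Minimal sketch: check the normal and 10% (independence) conditions
# for a one-sample z-interval for a proportion.
def check_conditions(successes, failures, population_size):
    n = successes + failures                         # sample size
    normal_ok = successes >= 10 and failures >= 10   # at least 10 of each observed
    independence_ok = n <= 0.10 * population_size    # 10% rule when sampling without replacement
    return normal_ok, independence_ok

# Example: 12 successes and 38 failures drawn from a population of 10,000
print(check_conditions(12, 38, 10_000))              # (True, True)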

Want to join the conversation?

  • tranhungkcn
    In the first simulation, as I understand it, the population is 250 and the sample size is 200, without replacement (meaning we don't put the gumballs back into the machine). How can we have many samples? With a population of 250 and a sample of 200, I think we can only have 1 sample?
    (6 votes)
  • alexiawpy
    Hi guys, here Sal mentioned that to get a normally distributed sampling distribution of the sample mean, we need at least 10 successes and 10 failures in each sample. However, in one of the previous exercises, the minimum sample size was said to be 30 if we want a normal distribution. These two seem to contradict each other. Any advice on this?
    (3 votes)
    • book
      They actually aren't contradicting each other. The sample size needs to be at least 30, so n >= 30, and there need to be at least 10 successes and 10 failures in the sample. So remember that np >= 10 and n(1-p) >= 10, which means the proportion of successes times the sample size needs to be at least 10, and the proportion of failures times the sample size needs to be at least 10. Multiplying the proportion of successes/failures by the sample size basically gives you the number of successes/failures in a sample. Here's an example to see how they relate:

      Sample size: 50
      Proportion of successes: 0.4
      Proportion of failures: 0.6 (or 1 - 0.4)

      n(p)=?
      50(0.4)=20

      n(1-p)=?
      50(0.6)=30

      Now look, we can divide the number of successes/failures by the sample size to get back the proportions of successes/failures in the sample:

      20/50= 0.4
      0.4=p

      30/50=0.6
      0.6= 1-p

      So essentially, we first check that the sample size is at least 30. And if that is met, we check whether the numbers of successes and failures in the sample are each at least 10. If not, the sampling distribution would probably not be approximately normal. (See the short code check after this reply.)
      (10 votes)
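
      As a quick aside, the check above can be replayed in a few lines of Python (purely illustrative, using the same numbers as the example):

      n, p = 50, 0.4                                # sample size and proportion of successes
      successes = n * p                             # 50 * 0.4 = 20
      failures = n * (1 - p)                        # 50 * 0.6 = 30
      print(successes >= 10 and failures >= 10)     # True: the 10-successes/10-failures check passes
      print(successes / n, failures / n)            # 0.4 0.6: dividing the counts by n gives back the proportions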
  • Yash Singh
    How do you access the gumball simulation?
    (5 votes)
  • hanswang1108
    What's the normal condition for a non-Bernoulli distribution?
    (4 votes)
    • Yan K
      Mainly the sample size (n), which has to be at least 30. According to the Central Limit Theorem, the sampling distribution of the sample mean is approximately normal as long as the sample size is large enough, whatever the shape of the population distribution.
      (3 votes)
  • Xi BU
    I can't understand why, for the normal condition, we should expect at least 10 successes and 10 failures each. If the precondition for the sampling distribution of the sample proportion to be normal is np >= 5 and n(1-p) >= 5 in a sample, why do the numbers of successes and failures have to be at least 10? Does that mean we have to conduct at least 2 samples?
    (4 votes)
  • rdeyke
    The independence condition is unintuitive to me. Shouldn't the sample statistics approach the population parameters as the fraction of the population sampled approaches 100%? Wouldn't that mean that the only consequence of not meeting the independence condition is that our estimates of the population parameters become more accurate than expected? How is getting "too accurate" estimates ever a problem in real life?

    (Intuitively, if polling ten people produces more accurate results than polling one person ten times, then replacement when sampling can only ever decrease the accuracy of a poll.)
    (2 votes)
    • ilya112358
      You are comparing samples of different sizes (1 and 10). Indeed, the bigger the sample size, the closer the sample mean is expected to be to the population mean.

      The problem lies elsewhere. Since we calculate our confidence intervals in the number of stddevs from the mean, it is important for the stddev of our sample to be an unbiased estimate of the stddev of the population.

      The stddev of the sample with replacement is such an estimate. But the stddev of the sample without replacement is not; it is actually smaller (see the small simulation after this reply for how not replacing shrinks the spread of the sample proportions). So, when we claim with 95% confidence that the population mean is no farther than 2 stddevs from the sample mean, and we calculate that distance using the stddev of the sample without replacement, we are falling short: the interval is smaller than it's supposed to be.

      Intuitively, the bigger the sample, the closer we are to the mean, but the less confident we are about how close :)
      (4 votes)
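
      A small simulation makes the effect of not replacing concrete. The sketch below is my own illustration (numbers chosen to echo the gumball setup: a population of 250, samples of 200, true proportion 0.6): it draws many samples with and without replacement and compares the spread of the sample proportions to the sqrt(p(1-p)/n) value that the usual confidence-interval formula assumes.

      import numpy as np

      rng = np.random.default_rng(0)
      N, n, p = 250, 200, 0.6                            # population size, sample size, true proportion
      population = np.array([1] * int(N * p) + [0] * (N - int(N * p)))  # fixed finite population

      props_with = [rng.choice(population, n, replace=True).mean() for _ in range(5000)]
      props_without = [rng.choice(population, n, replace=False).mean() for _ in range(5000)]

      print("SE assumed by the formula:", np.sqrt(p * (1 - p) / n))
      print("SD with replacement     :", np.std(props_with))      # close to the assumed SE
      print("SD without replacement  :", np.std(props_without))   # noticeably smaller, since draws aren't independent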
  • swimmer737
    I want to find more information about the normal condition. Does anyone know a search term or keyword I could use?
    (3 votes)
  • Yash Singh
    In the 10% rule when Khan says _n_<10% of the population, isn't it supposed to include 10% itself?
    (2 votes)
  • Aarya
    Why does the margin of error CHANGE? For example, if we want 95% confidence intervals, and we take samples of size n = 10, they would all be the same length for that study; margin of error = (critical value)*(stdev), say (2)(4.5) if we want to cover 2 stdevs from either side of p hat, where stdev = 4.5. Wouldn’t this value of margin of error (2*4.5=9) (the “stems” on either side of p hat) be the SAME for ALL confidence intervals for that study? Thanks!
    (1 vote)
  • cjbuser1234
    I'm trying to recreate the simulation in Python.


    import numpy as np
    from statsmodels.stats.proportion import proportion_confint

    for s in [.1, .2, .3, .4, .5, .6, .7, .8, .9, 1.]:
        c = 0
        for i in range(1500):
            p = .6
            N = 250
            # create list of Bernoulli trials
            population = np.random.binomial(1, p, N)
            ix = list(range(N))
            # random sample of s percent of population
            test_ix = np.random.choice(ix, int(N * s), replace=False)
            test_x = population[test_ix].sum()

            # 95% confidence interval for the sample proportion
            lower, upper = proportion_confint(test_x, int(N * s))
            # is the true p in the CI?
            if lower <= p <= upper:
                c += 1
        # "hit rate" across the 1500 simulations for this sampling fraction
        print("Prc: %s, Hit Rate: %s " % (s, (c / 1500)))



    Output


    Prc: 0.1, Hit Rate: 0.9486666666666667
    Prc: 0.2, Hit Rate: 0.9506666666666667
    Prc: 0.3, Hit Rate: 0.9486666666666667
    Prc: 0.4, Hit Rate: 0.9553333333333334
    Prc: 0.5, Hit Rate: 0.95
    Prc: 0.6, Hit Rate: 0.9466666666666667
    Prc: 0.7, Hit Rate: 0.942
    Prc: 0.8, Hit Rate: 0.9386666666666666
    Prc: 0.9, Hit Rate: 0.962
    Prc: 1.0, Hit Rate: 0.9533333333333334


    I'm not getting similar results. Any ideas what's going on? I think my code is correct. Am I getting something wrong with the theory?
    (1 vote)

Video transcript

- [Instructor] What we're going to do in this video is dig a little bit deeper into confidence intervals. In other videos, we compute them, we even interpret them, but here we're gonna make sure that we are making the right assumptions so that we can have confidence in our confidence intervals, or that we are even calculating them in the right way or in the right context. So just as a bit of review, a lot of what we do with confidence intervals is try to estimate some population parameter. Let's say it's the proportion, maybe it's the proportion that will vote for a candidate. We can't survey everyone, so we take a sample. And from that sample, maybe we calculate a sample proportion. And then using this sample proportion, we calculate a confidence interval on either side of that sample proportion. And what we know is that if we do this many, many, many times, every time we do it we are very likely to have a different sample proportion. So that'd be sample proportion one, sample proportion two. And every time we do it, not only will we get a different, I guess you can say, center of our interval, but the margin of error might change, because we are using the sample proportion to calculate it.

But the first assumption that has to be true, even (mumbles) any claims about this confidence interval with confidence, is that your sample is random. So that you have a random sample. If you're trying to estimate the proportion of people that are gonna vote for a certain candidate but you are only surveying people at a senior community, well, that would not be a truly random sample; the same would be true if we were only to survey people on a college campus. So like with all things in statistics, you really wanna make sure that you're dealing with a random sample and take great care to do that.

The second thing that we have to assume is sometimes known as the normal condition. Remember, the whole basis behind confidence intervals is that we assume the distribution of the sample proportions, the sampling distribution of the sample proportions, has roughly a normal shape. But in order to make that assumption that it's roughly normal, we have this normal condition. And the rule of thumb here is that you would expect, per sample, more than 10 successes and more than 10 failures. So for example, if your sample size was only 10 and, let's say, the true proportion was 50% or 0.5, then you wouldn't meet that normal condition, because you would expect five successes and five failures for each sample. Now, because usually when we're doing confidence intervals we don't even know the true population parameter, what we would actually do is just look at our sample and count how many successes and how many failures we have. And if we have less than 10 on either one of those, then we are going to have a problem. So you want to have at least 10 successes and at least 10 failures. And you actually don't even have to say "expect," because you're going to get a sample and you can just count how many successes and failures you have. If you don't see that, then the normal condition is not met, and the statements you make about your confidence interval aren't necessarily going to be as valid.

The last thing we really wanna make sure of is known as the independence condition, and this is the 10% rule. It matters if we are sampling without replacement, and sometimes it's hard to do replacement. If you're surveying people who are exiting a store, for example, you can't ask them to go back into the store, or it might be very awkward to ask them to go back into the store. And so the independence condition is that your sample size, let me just call it n, is less than 10% of the population size. And so let's say your population were 100,000 people. If you surveyed 1,000 people, well, that was 1% of the population, so you'd feel pretty good that the independence condition is met. And once again, this is valuable when you are sampling without replacement.

Now, to appreciate how our confidence intervals don't do what we think they're gonna do when any of these things are broken, I'll focus on these latter two. The random sample condition, that's super important, frankly, in all of statistics. So let's first look at a situation where the independence condition breaks down. So right over here, you can see that we are using our little gumball simulation. And in that gumball simulation, we have a true population proportion, but someone doing these samples might not know that. We're trying to construct confidence intervals with a 95% confidence level. And what we've set up here is we aren't replacing. So for every member of our sample, we're not looking at it and putting it back in. We're just gonna take a sample of 200, and I've set up the population so that the sample is far larger than 10% of the population. And then I drew a bunch of samples: this is a situation where I did almost 1,500 samples here of size 200. What you can see here are the situations where our true population parameter was contained in the confidence interval that we calculated for that sample, and then you see in red the ones where it's not. And as you can see, we are only having a hit, so to speak (an overlap between the confidence interval that we're calculating and the true population parameter), about 93% of the time. And this is a pretty large number of samples. If it's truly at a 95% confidence level, this should be happening 95% of the time.

Similarly, we can look at a situation where our normal condition breaks down. For our normal condition, we can see here that our sample size (mumbles) is 15. Actually, if I scroll down a little bit, you can see that the simulation even warns me: there are fewer than 10 expected successes. And you can see that when I do, once again, a bunch of samples here (I did over 2,000 samples), even though I'm trying to set up these confidence intervals so that every time I compute them there's, over time, kind of a 95% hit rate so to speak, here there's only a 94% hit rate. And I've done a lot of samples here. And so the big takeaway: not being random will really skew things, but if you don't feel good about how normal the actual sampling distribution of the sample proportions is, or if your sample size is a fairly large chunk of your population and you're not replacing and you're violating the independence condition, then the confidence level that you think you're computing for when you make your confidence intervals might not be valid.
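
If you want to experiment with the second scenario from the video yourself, here is a rough sketch of that kind of simulation in Python (my own approximation, not the gumball tool itself; the sample size of 15 comes from the video, while the true proportion of 0.25 is made up). With these numbers you'd expect only about 3.75 successes per sample, so the normal condition is not met; the code draws many small samples, builds the usual 95% z-interval from each, and reports how often it captures the true proportion.

import numpy as np

rng = np.random.default_rng(0)
p, n, z, trials = 0.25, 15, 1.96, 10_000            # true proportion, small sample, 95% critical value
hits = 0
for _ in range(trials):
    sample = rng.binomial(1, p, n)                   # one sample of 15 Bernoulli trials
    p_hat = sample.mean()
    margin = z * np.sqrt(p_hat * (1 - p_hat) / n)    # usual one-sample z-interval margin of error
    if p_hat - margin <= p <= p_hat + margin:
        hits += 1
print("hit rate:", hits / trials)                    # compare with the nominal 0.95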