AP.STATS: UNC‑5 (EU), UNC‑5.B (LO), UNC‑5.B.1 (EK), UNC‑5.B.2 (EK), UNC‑5.B.3 (EK)

Video transcript

What we are going to do in this video is talk about the idea of power when we are dealing with significance tests. Power is an idea that you might encounter in a first-year statistics course. It turns out that it's fairly difficult to calculate, but it's interesting to know what it means and what the levers are that might increase or decrease the power of a significance test.

So, just to cut to the chase, power is a probability. You can view it as the probability that you're doing the right thing when the null hypothesis is not true, and the right thing is that you should reject the null hypothesis if it's not true. So it's the probability of rejecting your null hypothesis given that the null hypothesis is false; you could view it as a conditional probability like that. But there are other ways to conceptualize it. We can connect it to type 2 errors, for example: you could say this is equal to one minus the probability of not rejecting the null hypothesis given that the null hypothesis is false. And this thing that I just described, not rejecting the null hypothesis given that the null hypothesis is false, that's the definition of a type 2 error. So you could view power as just the probability of not making a type 2 error, or one minus the probability of making a type 2 error. Hopefully that's not confusing, so let me just write it the other way: you could say it's the probability of not making a type 2 error.

So what are the things that would actually drive power? To help us conceptualize that, I'll draw two sampling distributions: one where we assume that the null hypothesis is true, and one where we assume that the null hypothesis is false and the true population parameter is something different from what the null hypothesis is saying. For example, let's say that we have a null hypothesis that our population mean is equal to, let's just call it, mu 1, and we have an alternative hypothesis, H sub a, that says, hey, no, the population mean is not equal to mu 1.

So if you assumed a world where the null hypothesis is true (I'll do that in blue), what would be our sampling distribution? Remember, what we do in significance tests is we have some form of a population (let me draw that, you have a population right over here), and our hypotheses are making some statement about a parameter in that population. To test it, we take a sample of a certain size and calculate a statistic, in this case the sample mean, and we ask: if we assume that our null hypothesis is true, what is the probability of getting that sample statistic? If that probability is below a threshold, which we call a significance level, we reject the null hypothesis. So in that world we have been living in, a world where you assume the null hypothesis is true, you might have a sampling distribution that looks something like this. If the null hypothesis is true, then the center of your sampling distribution would be right over here at mu 1, and given your sample size, you would get a certain sampling distribution for the sample means. If your sample size increases, this will be narrower; if it decreases, this thing is going to be wider. And you set a significance level, which is essentially your probability of rejecting the null hypothesis even if it is true; as we've talked about, you can view your significance level as the probability of making a type 1 error. So your significance level is some area, and let's say it's this area that I'm shading in orange right over here; that would be your significance level.
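(Not from the video, but to make that picture concrete: here is a minimal Python sketch of the rejection region, assuming a two-sided one-sample z-test with a known population standard deviation. The numbers mu_1 = 100, sigma = 10, n = 25, and alpha = 0.05 are made up for illustration.)

```python
# Minimal sketch (not from the video), with made-up numbers: the rejection region
# of a two-sided one-sample z-test if the null hypothesis H0: mu = mu_1 is true.
from scipy.stats import norm

mu_1, sigma, n, alpha = 100, 10, 25, 0.05
se = sigma / n ** 0.5                          # spread of the sampling distribution of the mean
z_crit = norm.ppf(1 - alpha / 2)               # two-sided critical z value
lower, upper = mu_1 - z_crit * se, mu_1 + z_crit * se
print(f"Reject H0 when the sample mean is below {lower:.2f} or above {upper:.2f}")
# By construction, P(reject H0 | H0 true) = alpha = 0.05, the type 1 error probability.
```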
So if you took a sample right over here and you calculated its sample mean, and it happened to fall in this area, or this area, or this area right over here, then you would reject your null hypothesis. Now, if the null hypothesis actually was true, you would be committing a type 1 error without knowing about it. But for power, we are concerned with a type 2 error, because power is a conditional probability given that our null hypothesis is false.

So let's construct another sampling distribution for the case where our null hypothesis is false. Let me just continue this line right over here. Let's imagine a world where our null hypothesis is false, and it's actually the case that our mean is mu 2; let's say that mu 2 is right over here. In this reality our sampling distribution might look something like this; once again, it's for a given sample size, and the larger the sample size, the narrower this bell curve would be. In this world we should be rejecting the null hypothesis, but for which samples are we not rejecting the null hypothesis even though we should? Well, we're not going to reject the null hypothesis if we get a sample here, or a sample here, or a sample here, a sample where, if you assume the null hypothesis is true, the result isn't that unlikely. So the probability of making a type 2 error, when we should reject the null hypothesis but we don't, is actually this area right over here. And the power, the probability of rejecting the null hypothesis given that it's false, would be the rest of the area under this red distribution, right over here.

So how can we increase the power? Well, one way is to increase our alpha, our significance level. Remember, the significance level is an area, so if we want it to go up, we increase that area so it looks something like that. By expanding that significance area, we have increased the power, because now this yellow area is larger; we've pushed its boundary to the left. Now you might say, hey, if we want to increase the power, and power sounds like a good thing, why don't we just always increase alpha? Well, the problem with that is, if you take alpha, your significance level, and increase it, that will increase the power, but it's also going to increase your probability of a type 1 error, because, remember, that's one way to conceptualize what alpha, your significance level, is: it's the probability of a type 1 error.

Now, what are other ways to increase your power? Well, if you increase your sample size, then both of these sampling distributions are going to get narrower, and if both of these sampling distributions get narrower, then that situation where you are not rejecting your null hypothesis even though you should is going to have a lot less area; one way to think about it, there's going to be a lot less overlap between these two sampling distributions. So let me write that down: another way is, if you increase n, your sample size, that's going to increase your power, and this, in general, is always a good thing if you can do it.
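(Again, not from the video: a minimal sketch that makes those two areas, the type 2 error probability and the power, concrete under the same assumed two-sided z-test. All numbers, including the true mean mu_2 = 103, are made up for illustration.)

```python
# Minimal sketch (not from the video), with made-up numbers: the type 2 error
# probability and the power of a two-sided z-test of H0: mu = mu_1 when the
# true mean is actually mu_2.
from scipy.stats import norm

mu_1, mu_2, sigma, n, alpha = 100, 103, 10, 25, 0.05
se = sigma / n ** 0.5                                   # spread of the sampling distribution
z_crit = norm.ppf(1 - alpha / 2)                        # two-sided critical z value
lower, upper = mu_1 - z_crit * se, mu_1 + z_crit * se   # rejection region boundaries under H0
# Type 2 error: the sample mean lands between the boundaries, so we fail to reject H0
beta = norm.cdf(upper, loc=mu_2, scale=se) - norm.cdf(lower, loc=mu_2, scale=se)
power = 1 - beta                                        # P(reject H0 | H0 false)
print(f"P(type 2 error) ~= {beta:.3f}, power ~= {power:.3f}")   # roughly 0.68 and 0.32
```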
Now, other things that may or may not be under your control: well, the less variability there is in the data set, measured by the variance or the standard deviation of your underlying data, the narrower these sampling distributions will be, and that would also increase the power. Another thing that would increase the power is if the true parameter is farther from what the null hypothesis is saying; a true parameter far from the null hypothesis also increases the power. These two are not typically under your control, but the sample size is, and the significance level is. With the significance level there's a trade-off, though: if you increase the power that way, you're also increasing the probability of a type 1 error. So a lot of researchers might say, hey, if a type 2 error is worse, I'm willing to make this trade-off and I'll increase the significance level; but if a type 1 error is actually what I'm afraid of, then I wouldn't want to use this lever. In any case, increasing your sample size, if you can do it, is going to be a good thing.
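(One more sketch, not from the video: it wraps the same assumed two-sided z-test power calculation in a function and nudges each lever one at a time. All numbers are made up, but the direction each change moves the power matches what the video describes.)

```python
# Minimal sketch (not from the video), with made-up numbers: how each lever
# moves the power of the assumed two-sided one-sample z-test.
from scipy.stats import norm

def power(mu_1, mu_2, sigma, n, alpha):
    """P(reject H0: mu = mu_1 | true mean is mu_2) for a two-sided z-test."""
    se = sigma / n ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    lower, upper = mu_1 - z_crit * se, mu_1 + z_crit * se   # rejection region under H0
    beta = norm.cdf(upper, loc=mu_2, scale=se) - norm.cdf(lower, loc=mu_2, scale=se)
    return 1 - beta

base = dict(mu_1=100, mu_2=103, sigma=10, n=25, alpha=0.05)
print("baseline:                   %.3f" % power(**base))                     # ~0.32
print("larger alpha (0.10):        %.3f" % power(**{**base, "alpha": 0.10}))  # up, but more type 1 risk
print("larger sample (n = 100):    %.3f" % power(**{**base, "n": 100}))       # up
print("less variability (sigma 5): %.3f" % power(**{**base, "sigma": 5}))     # up
print("truth farther (mu_2 = 106): %.3f" % power(**{**base, "mu_2": 106}))    # up
```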
AP® is a registered trademark of the College Board, which has not reviewed this resource.