Statistics and probability > Unit 12 > Lesson 2: Error probabilities and power

# Introduction to power in significance tests

## Want to join the conversation?

• Hey, I don't quite understand why two overlapping distribution graphs are shown?
• If the null hypothesis is in fact correct, then the hypothesized and actual sampling distributions are one and the same, centered on μ1. In this event, there is only one distribution. If, however, the null hypothesis is false, then there must exist a different [second] sampling distribution, centered on μ2, which is likely to overlap to some extent with the hypothesized distribution centered on μ1.
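The two-distribution picture can be sketched numerically. This is a minimal illustration with made-up numbers (μ1 = 0, μ2 = 1, σ = 1, n = 9 are assumptions, not values from the video):

```python
from statistics import NormalDist

# Hypothetical numbers (not from the video): H0 says mu = mu1.
mu1, mu2, sigma, n = 0.0, 1.0, 1.0, 9
se = sigma / n ** 0.5               # standard error of the sample mean

hypothesized = NormalDist(mu1, se)  # sampling distribution if H0 is true
actual = NormalDist(mu2, se)        # sampling distribution if the true mean is mu2

# If H0 is true, the two are one and the same distribution.
# If not, they are distinct but overlapping: both assign probability
# to the same range of possible sample means.
crit = hypothesized.inv_cdf(0.95)   # upper 5% cutoff under H0
print(f"P(xbar > crit | H0 true)  = {1 - hypothesized.cdf(crit):.3f}")  # alpha
print(f"P(xbar > crit | mu = mu2) = {1 - actual.cdf(crit):.3f}")        # power
```

The first probability is the significance level α; the second, computed from the *other* curve, is the power. That is why the picture needs both curves.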
• Shouldn't Sal write α/2 when marking a rejection region, since we are doing a two-tailed test and the total probability corresponding to our significance level is split between the two tails?
• How does increasing the alpha level increase power? More area for the alpha level means less for power, right?
(1 vote)
• Because if you draw the two curves, where they intersect forms a little triangle. If you fill most of that triangle with alpha (the significance level), you leave little room for a Type II error (failing to reject H₀ when it is false).
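The tradeoff in this answer can be checked directly. A minimal sketch of a right-tailed z-test with assumed numbers (H₀: μ = 0, specific alternative μ = 1, σ = 1, n = 9):

```python
from statistics import NormalDist

# Hypothetical one-tailed z-test (numbers assumed, not from the video):
# H0: mu = 0 vs the specific alternative mu = 1, with sigma = 1, n = 9.
mu0, mu_alt, sigma, n = 0.0, 1.0, 1.0, 9
se = sigma / n ** 0.5

def power(alpha):
    """Power = P(reject H0 | mu = mu_alt) for a right-tailed z-test."""
    crit = NormalDist(mu0, se).inv_cdf(1 - alpha)  # rejection cutoff
    return 1 - NormalDist(mu_alt, se).cdf(crit)

for a in (0.01, 0.05, 0.10):
    print(f"alpha = {a:.2f} -> power = {power(a):.3f}")
# Raising alpha moves the cutoff toward mu0, which shrinks beta
# (the Type II error probability) and so raises power = 1 - beta.
```

Raising α enlarges the rejection region, so more of the *alternative* curve falls inside it: power goes up, not down.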
• Will there ever be a time when a large sample size has negative effects on statistical analysis?
• In some very specific situations that's possible. If you currently have a p-value that would make you draw the correct conclusion, sampling more might change it to one where you would draw the wrong conclusion. But don't use that as a reason to keep your sample small. Larger samples will generally only improve the quality of statistical analyses.
(1 vote)
• What is the difference between power and effect size?
(1 vote)
• Why isn't the power of a test against a specific alternative always equal to 100 percent, even if the specified alternative is clearly different from the null hypothesis value and supports the alternative hypothesis? For example, let's say that the null hypothesis of a population proportion is 0.13. A specified alternative is 0.17. Why is the power of a test against 0.17 not equal to 100 percent?
(1 vote)
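The question's own numbers (p₀ = 0.13, alternative p = 0.17) can be used to show why power stays below 100%. The sample size (n = 100), α = 0.05, the one-tailed direction, and the normal approximation are all assumptions added for illustration:

```python
from statistics import NormalDist
from math import sqrt

# Questioner's numbers: p0 = 0.13, alternative p = 0.17.
# Assumed for illustration: n = 100, alpha = 0.05, right-tailed test,
# normal approximation to the sampling distribution of p-hat.
p0, p_alt, n, alpha = 0.13, 0.17, 100, 0.05

se0 = sqrt(p0 * (1 - p0) / n)                  # SE under H0
crit = NormalDist(p0, se0).inv_cdf(1 - alpha)  # rejection cutoff

se_alt = sqrt(p_alt * (1 - p_alt) / n)         # SE if the true proportion is 0.17
power = 1 - NormalDist(p_alt, se_alt).cdf(crit)
print(f"power against p = {p_alt}: {power:.3f}")
# Well below 100%: even when p really is 0.17, sampling variability
# means many samples still land in the non-rejection region.
```

Power is limited by overlap between the two sampling distributions, not by how "clearly different" the two parameter values look; with a modest n, 0.13 and 0.17 produce heavily overlapping curves.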
• The p-value of a significance test is the probability of getting the sample result, given the distribution of sample means, were the population mean equal to whatever value the null hypothesis ascribes to it. Most importantly, the shape of the distribution of sample means, which influences that probability, is affected by your actual sample size, from which your study result is taken. Have I got this right?
(1 vote)
• So then is power contingent on the two graphs intersecting at the α value?
(1 vote)
• Hello, I want to know whether I need to memorize the graph of the two curves where you explain the power and the errors for the AP exam. By "memorize" I mean: am I required to fully and deeply understand how it works, and will I face questions that use graphs about the errors and power?
(1 vote)
• In this video, "Not rejecting `H₀` given that it is false" should be "Not rejecting `H₀` given that it is `μ₂`". If instead you wanted to calculate "Not rejecting `H₀` given that it is false", you would need to integrate over all possible `μ`, given some prior (and this would still involve assuming the distribution is from the normal family).
(1 vote)
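The commenter's point is that β is conditional on one *specific* alternative μ₂, not on "H₀ is false" in general. A sketch with assumed numbers (μ₀ = 0, μ₂ = 1, σ = 1, n = 9, α = 0.05, right-tailed test):

```python
from statistics import NormalDist

# Assumed numbers for illustration: right-tailed z-test.
mu0, mu2, sigma, n, alpha = 0.0, 1.0, 1.0, 9, 0.05
se = sigma / n ** 0.5

crit = NormalDist(mu0, se).inv_cdf(1 - alpha)  # rejection cutoff under H0
beta = NormalDist(mu2, se).cdf(crit)           # P(not reject H0 | mu = mu2)
power = 1 - beta                               # power against mu2 specifically
print(f"beta = {beta:.3f}, power = {power:.3f}")
# A different mu2 would give a different beta; without a prior over mu,
# there is no single number for "P(not reject H0 | H0 false)".
```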