
## AP®︎/College Statistics

### Course: AP®︎/College Statistics > Unit 10

Lesson 7: Potential errors when performing tests

# Introduction to Type I and Type II errors

Both Type I and Type II errors are mistakes made when testing a hypothesis. A Type I error occurs when you wrongly reject the null hypothesis (i.e., you conclude there is a significant effect when there really isn't one). A Type II error occurs when you wrongly fail to reject the null hypothesis (i.e., you miss a real effect).
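The two error types above can be summarized as a sketch. This is a minimal illustration (the function name and return strings are hypothetical, not from the article): it maps each combination of "is the null hypothesis actually true?" and "did we reject it?" to the corresponding outcome.

```python
# Illustrative sketch: the four possible outcomes of a hypothesis test.
def classify_outcome(null_is_true: bool, reject_null: bool) -> str:
    if null_is_true and reject_null:
        return "Type I error"    # rejected a null hypothesis that was true
    if not null_is_true and not reject_null:
        return "Type II error"   # failed to reject a null hypothesis that was false
    return "correct decision"

print(classify_outcome(True, True))    # Type I error
print(classify_outcome(False, False))  # Type II error
```

Only the two "mismatched" cells of this 2x2 table are errors; the other two combinations are correct decisions.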

## Want to join the conversation?

• Why is P(Type I error) = significance level? What's the logic behind it?
• Great question! I hope I can do it justice.

When we choose a significance level, we're saying that we're willing to accept a Type I error occurring with that much probability or, in other words, "that often."

The reason they're the same is that, when performing a significance (hypothesis) test, we compare the probability of the outcome we got from our sample, together with the outcomes even less likely (altogether, our p-value), to the significance level we set beforehand, and that comparison determines whether we reject or fail to reject the null hypothesis.

In other words, the significance level is a probability threshold. Any outcome with a probability less than that threshold will cause us to reject the null hypothesis, regardless of whether it’s true.

When the null hypothesis is in fact true, that threshold is exactly how often we would be committing a Type I error.

For example, if we set a significance level of 5%, we will reject the null hypothesis whenever our p-value is less than 5%. If the null hypothesis is true, our p-value will fall below 5% in roughly 5% of the tests we run, so we will reject the null hypothesis by mistake 5% of the time; that is why the Type I error rate (another name for the significance level, or alpha) is 5%.

Also, if our significance level is high, then many of the possible outcomes will cause us to reject the null hypothesis; likewise, if our significance level is low, then few of the possible outcomes will cause us to reject the null hypothesis. If the null hypothesis is actually correct, then in all of those cases, we would have committed a Type I error.

Hope this helps. :)
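The "5% of the time" claim in the answer above can be checked by simulation. The sketch below (the setup is assumed, not from the article) repeatedly tests H0: p = 0.5 on data generated from a genuinely fair coin, using a normal approximation to the binomial; the fraction of runs with p-value below alpha = 0.05 should come out near 0.05, the Type I error rate.

```python
import math
import random

def z_test_p_value(heads: int, n: int, p0: float = 0.5) -> float:
    """Two-sided p-value for H0: p = p0, via a normal approximation."""
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (heads / n - p0) / se
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))); two-sided tail probability:
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
alpha, n, trials = 0.05, 500, 1000
rejections = 0
for _ in range(trials):
    # H0 is true: the coin really is fair.
    heads = sum(random.random() < 0.5 for _ in range(n))
    if z_test_p_value(heads, n) < alpha:
        rejections += 1

print(rejections / trials)  # roughly 0.05: the Type I error rate equals alpha
```

Because the data are discrete and the normal approximation is not exact, the observed rejection rate will hover near, not exactly at, 0.05.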
• Given a significance level of 0.05, the chance of rejecting H0 when H0 is true is 0.05, so the chance of failing to reject H0 when H0 is true is 0.95. But what is the probability of rejecting H0 when H0 is false, and what is the probability of failing to reject H0 when H0 is false?
• I'm still unsure how the true parameter relates to Type I and Type II error.
• What's a realistic example of someone deciding between a Type I and a Type II error?
• I wanted to understand how this relates to the confusion matrix calculated in machine learning (Python specifically), since the 'correct' entries are on the other diagonal in that case.
(1 vote) • Has this happened in real life recently?
(1 vote) • A Type I error occurs when we reject the null hypothesis of a population parameter when the null hypothesis is actually true. But how do we know that the null hypothesis is true, considering that we can never be certain about a population parameter?
(1 vote) • He said P(Type I error) = α. What's P(Type II error)?
(1 vote) • If there is no difference between groups, can a Type I or Type II error occur?
(1 vote) • Should I know Type I and Type II errors for the MCAT?
(1 vote) 