Comparing P-values to different significance levels

AP.STATS: DAT‑3 (EU), DAT‑3.A (LO), DAT‑3.A.1 (EK), DAT‑3.A.2 (EK), DAT‑3.B (LO), DAT‑3.B.1 (EK), DAT‑3.B.2 (EK), DAT‑3.B.3 (EK), VAR‑6 (EU), VAR‑6.G.4 (EK)
Example comparing P-values to different significance levels, and why it's important to set the significance level before a test.

Want to join the conversation?

  • ju lee:
    How do we choose a significance level? Is there anything we should use to determine whether to choose a significance level of 0.05, 0.01, or any other value?
    (10 votes)
    • Ian Pulizzotto:
      Good question!

      It is common to use significance level 0.05, but 0.01 is sometimes used. It depends on the type of real-life situation. The statistician would need to evaluate whether or not lowering the significance level in order to decrease the probability of making a Type I error (the error of rejecting H0 when it's actually true) is worth the cost of increasing the probability of making a Type II error (the error of failing to reject H0 when H0 is false).
      (22 votes)
  • Akshay L Aradhya:
    I'm getting a different p-value of 0.01586... compared to the 0.036 in the video.

    Isn't the formula to calculate the p-value

    (0.5^100) * (100 choose 59)

    And that turns out to be (WolframAlpha Link) : https://bit.ly/2xEnp10
    (15 votes)
  • Dirk van Zuijlen:
    How is the p-value of 0.036 calculated? (at 3:14 in the video)
    (9 votes)
  • Yash Singh:
    How do you assume the null hypothesis is true? Do you just say that to yourself? Thanks in advance.
    (2 votes)
  • Brzoskwinia:
    I've tried to calculate that p-value and got a different result (0.044313). Here's my question on StackExchange with a description of how I got that result: https://math.stackexchange.com/questions/3795024/how-to-manually-calculate-the-p-value-of-getting-at-least-59-heads-in-100-coin-f
    Can anyone help?
    (2 votes)
    • gmbushyeager:
      For a one-proportion z-test, the standard error is computed under the null hypothesis (p0 = 0.5), not from the sample proportion:

      sample size n = 100, p-hat = 0.59, p0 = 0.5
      standard error = sqrt[(0.5)(0.5)/100] = 0.05
      z = (observed - expected)/(standard error) = (0.59 - 0.5)/0.05 = 1.8

      Looking up 1.8 in the z table and subtracting the table value from 1 gives about 0.036, the p-value in the video. Your 0.044313 is the exact binomial probability of getting at least 59 heads; 0.036 is the normal approximation to it, which is why the two differ slightly. (A short numeric check appears after this thread.)
      (4 votes)
  • Jose Sifuentes:
    How do you calculate the p-value itself? I didn't find a part in this lesson's videos that explains what to actually plug into which equations.
    (3 votes)
  • Tuan Ha:
    Why does the p-value represent the probability of a proportion of 59% or greater, rather than exactly 59%?
    (2 votes)
  • Minsuk Kang:
    What if my result is p_hat = 40/100? I would assume that the p-value should be small enough.
    Hence, it would reject the null hypothesis, but it definitely does not suggest the alternative hypothesis.
    What should I say in this case?
    Can the alternative hypothesis be the inverse of the null hypothesis? In this case, Ha: p != 0.5.
    (2 votes)
  • JarrettSiebring:
    If you reject your null hypothesis, does that mean you 100% have to use the alternative hypothesis?
    (1 vote)
  • thegreenegirl:
    How are confidence intervals related to significance levels?
    (1 vote)
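
Several questions above ask how the 0.036 p-value is computed and why other calculations give 0.01586 or 0.044313. The short Python sketch below is not part of the original lesson (it assumes SciPy is installed); it reproduces all three numbers for the scenario of 59 heads in 100 spins.

    # Reproducing the p-values discussed in this thread (illustrative sketch).
    # H0: p = 0.5, Ha: p > 0.5, with 59 heads observed in n = 100 spins.
    from scipy.stats import binom, norm

    n, x, p0 = 100, 59, 0.5
    p_hat = x / n                          # 0.59

    # One-proportion z-test (the video's value): SE computed under H0.
    se = (p0 * (1 - p0) / n) ** 0.5        # 0.05
    z = (p_hat - p0) / se                  # 1.8
    print(norm.sf(z))                      # ~0.0359, reported as 0.036 in the video

    # Exact binomial tail P(X >= 59): the 0.044313 mentioned above.
    print(binom.sf(x - 1, n, p0))          # ~0.0443

    # P(X == 59) alone, i.e. (100 choose 59) * 0.5^100: the 0.01586 above.
    # It is not a p-value, because a p-value includes "59 heads or more".
    print(binom.pmf(x, n, p0))             # ~0.0159

The gap between 0.036 and 0.044 is just the normal approximation versus the exact binomial calculation; at the significance levels discussed in the video, both lead to the same conclusions.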

Video transcript

- [Instructor] What we're going to do in this video is talk about significance levels which are denoted by the Greek letter alpha and we're gonna talk about two things, the different conclusions you might make based on the different significance levels that you might set and also why it's important to set your significance levels ahead of time, before you conduct an experiment and calculate the p-values, for, frankly, ethical purposes.

So to help us get this, let's look at a scenario right over here which tells us Rahim heard that spinning, rather than flipping, a penny raises the probability above 50% that the penny lands showing heads. That's actually quite fascinating if that's true. He tested this by spinning 10 different pennies 10 times each, so that would be a total of a hundred spins. His hypotheses were, his null hypothesis is that by spinning, your proportion doesn't change rather versus flipping, it's still 50% and his alternative hypothesis is that by spinning, your proportion of heads is greater than 50%, where p is the true proportion of spins that a penny would land showing heads. In his 100 spins, the penny landed showing heads in 59 spins. Rahim calculated that the statistic, so this is the sample proportion here, it's 59 out of a hundred were heads so that's 0.59 or 59 hundredths, and he calculated had an associated p-value of approximately 0.036.

So based on this scenario, if ahead of time, Rahim had set his significance level at 0.05, what conclusions would he now make? And while you're pausing it, think about how that may or may not have been different if he set his significance levels ahead of time at 0.01. Pause the video and try to figure that out.

So let's first of all remind ourselves what a p-value even is. You could view it as the probability of getting a sample proportion at least this large if you assume that the null hypothesis is true. And if that is low enough, if it's below some threshold, which is our significance level, then we will reject the null hypothesis. And so in this scenario, we do see that 0.036, our p-value is indeed less than alpha. It is indeed less than 0.05 and because of that, we would reject the null hypothesis. And in everyday language, rejecting the null hypothesis is rejecting the notion that the true proportion of spins that a penny would land showing heads is 50%. And if you reject your null hypothesis, you could also say that suggests our alternative hypothesis that the true proportion of spins that a penny would land showing heads is greater than 50%.

Now what about the situation where our significance level was lower? Well in this situation, our p-value, our probability of getting that sample statistic if we assumed our null hypothesis were true, in this situation, it's greater than or equal to, and it's greater than in this particular situation, than our threshold, than our significance level. And so here, we would say that we fail to reject our null hypothesis so we're failing to reject this right over here and it will not help us suggest our alternative hypothesis.

And so because of the difference between what you would conclude given this change in significance levels, that's why it's really important to set these levels ahead of time because you could imagine it's human nature, if you're a researcher of some kind, you want to have an interesting result. You want to discover something, you want to be able to tell your friends, hey, my alternative hypothesis it actually is suggested, we can reject the assumption, the status quo. I found something that actually makes a difference and so it's very tempting for a researcher to calculate your p-values and then say oh, well maybe no one will notice if I then set my significance values so that it's just high enough so that I can reject my null hypothesis. If you did that, that would be very unethical.

In future videos, we'll start thinking about the question of okay, if I'm doing it ahead of time, if I'm setting my significance level ahead of time, how do I decide to set the threshold? When should it be one-hundredths? When should it be five-hundredths? When should it be 10-hundredths? Or when should it be something else?
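
As a minimal illustration of the decision rule described in the transcript (the 0.036 comes from the scenario above; the snippet itself is not from the video), the same p-value leads to different conclusions under the two pre-set significance levels:

    # Same p-value, two pre-set significance levels, two different conclusions.
    p_value = 0.036                    # Rahim's p-value for 59 heads in 100 spins

    for alpha in (0.05, 0.01):
        if p_value < alpha:
            print(f"alpha = {alpha}: reject H0; the data suggest p > 0.5")
        else:
            print(f"alpha = {alpha}: fail to reject H0")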