In the last couple of videos, we first figured out the total variation in these nine data points, and we got that to be 30. That's our total sum of squares. Then we asked ourselves how much of that variation is due to variation within each of these groups versus variation between the groups themselves. For the variation within the groups, we computed our sum of squares within, and there we got 6. The balance of the 30 (there are no units here) came from variation between the groups, and we calculated that sum of squares between to be 24.

What I want to do in this video is use these statistics to do some inferential statistics, to come to some type of conclusion, or maybe not to come to one. And I want to put some context around these groups. We've just been dealing with them in the abstract so far, but you can imagine these are the results of some type of experiment. Let's say I gave three different types of food to people taking a test, and these are the scores on the test: this is food 1, this is food 2, and this right over here is food 3. I want to figure out whether the type of food people eat going into the test really affects their scores. If you look at these means, it looks like group 3 performed best, then group 2, then group 1. But is that difference purely random chance, or can I be pretty confident it's due to actual differences in the population means of all the people who would ever take food 3 versus food 2 versus food 1? So my question is whether
the true population means are the same. These are sample means, each based on just three data points, but suppose I knew the true population means. Is the mean of the population of people taking food 1 equal to the true population mean for food 2? Obviously, I'll never be able to give that food to every human being who could ever live and then make them all take an exam, but we're trying to get a sense that there is some true mean there; it's just not really measurable. And is that equal to the true population mean for food 3? I'm asking whether these are all equal, because if they're not, then the different foods actually do have some type of impact on how people perform on a test.

So let's do a little bit of a hypothesis test here. My null hypothesis is that the means are the same, that the food doesn't make a difference, and my alternative hypothesis is that it does. To think about this a little more quantitatively: if the food doesn't make a difference, the true population means of the groups will be the same. The true population mean of the group that took food 1 will be the same as that of the group that took food 2, which will be the same as that of the group that took food 3. If our alternative hypothesis is correct, then these means will not all be the same.

So how can we test this hypothesis? We're going to assume the null hypothesis, which is what we always do in hypothesis testing, and then essentially figure out the chances of getting a statistic this extreme. I haven't even defined what that statistic is yet. Assuming our null hypothesis, we're going to come up with a statistic called the F
statistic, which has an F distribution. We won't go really deep into the details of the F distribution, but you can already start to think of it as a ratio of two chi-squared distributions that may or may not have different degrees of freedom. Our F statistic is going to be the sum of squares between, divided by its degrees of freedom (this quantity is sometimes called the mean squares between, or MSB), all divided by the sum of squares within divided by its degrees of freedom, which was m(n − 1).

Now let's think about what this is doing. If the numerator is much larger than the denominator, that tells us that the variation in this data is mostly due to the differences between the actual means and less due to the variation within the groups. That should make us believe there is a difference in the true population means; if this number is really big, it should tell us there's a lower probability that our null hypothesis is correct. If instead the denominator is larger, that means the variation within each sample makes up a bigger percentage of the total variation than the variation between the samples. That would make us believe that any difference we actually see in the
means is probably just random, and that would make it a little harder to reject our null hypothesis.

So let's actually calculate it. Our sum of squares between, which we calculated over here, was 24, and it had 2 degrees of freedom. Our sum of squares within was 6, and its degrees of freedom were also 6. So we get 24 divided by 2, which is 12, divided by 6 over 6, which is 1. Our F statistic is therefore equal to 12. (The F stands for Fisher, the biologist and statistician who came up with it.) And what we're going to see is that this is a pretty high number.

Now, one thing I forgot to mention: in any hypothesis test we need some significance level. Let's say the significance level we care about for our hypothesis test is 10%, or 0.10. That means that if, assuming the null hypothesis, there is less than a 10% chance of getting the F statistic we got, then we will reject the null hypothesis. So what we want to do is figure out a critical F value such that the probability of getting a value that extreme or greater is 10%. If our F statistic is bigger than that critical value, we reject the null hypothesis; if it's less, we can't reject it.

I'm not going to go a lot into the guts of the F statistic, but you can already appreciate that each of these sums of squares has, roughly, a chi-squared distribution. The numerator is based on a chi-squared distribution with 2 degrees of freedom, and the denominator on a chi-squared distribution (we haven't normalized it and all of that) with 6 degrees of
freedom. So the F distribution is the ratio of two chi-squared distributions. This is a screenshot from a professor's course at UCLA (I hope they don't mind; I needed to find an F table for us to look at), and it shows what an F distribution looks like. Obviously, it will look different depending on the degrees of freedom of the numerator and the denominator; there are two degrees of freedom to think about, the numerator's and the denominator's.

With that said, let's calculate the critical F value for alpha equal to 0.10 (you'll actually see a different F table for each alpha), where our numerator degrees of freedom is 2 and our denominator degrees of freedom is 6. This whole table is for a significance level of 10%, or 0.10, and with those degrees of freedom our critical F value is 3.46. So this value right over here is 3.46. The F statistic we got from our data, 12, is much larger than that, way above it, so it's going to have a very, very small p-value. The probability of getting something this extreme just by chance, assuming the null hypothesis, is very low. Because our F statistic is way bigger than the critical F value at a 10% significance level, we can reject the null hypothesis. That leads us to believe that there probably is a difference in the population means, which tells us there probably is a difference in performance on an exam if you give people the different foods.
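To make the sums of squares concrete, here is a short Python sketch. The actual nine data points come from the earlier videos and aren't shown here, so the three groups below are hypothetical values, chosen only because they reproduce the stated totals (SST = 30, SSW = 6, SSB = 24):

```python
# Hypothetical groups of test scores (the real data are in the
# earlier videos); chosen to match the video's sums of squares.
food1 = [3, 2, 1]
food2 = [5, 3, 4]
food3 = [5, 6, 7]
groups = [food1, food2, food3]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Total sum of squares: every point around the grand mean.
sst = sum((x - grand_mean) ** 2 for x in all_scores)

# Sum of squares within: each point around its own group mean.
ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

# Sum of squares between: group means around the grand mean,
# weighted by group size.
ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

print(sst, ssw, ssb)  # 30.0 6.0 24.0
```

Note that the total always decomposes exactly: SST = SSW + SSB.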
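The F statistic itself needs only the two sums of squares and their degrees of freedom, so it can be computed directly from the numbers stated in the video (m = 3 groups, n = 3 data points per group):

```python
m, n = 3, 3           # number of groups, data points per group
ssb, ssw = 24.0, 6.0  # sums of squares from the video

df_between = m - 1        # 3 - 1 = 2
df_within = m * (n - 1)   # 3 * 2 = 6

msb = ssb / df_between    # mean squares between: 24 / 2 = 12
msw = ssw / df_within     # mean squares within:   6 / 6 = 1

f_stat = msb / msw
print(f_stat)  # 12.0
```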
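For the table lookup and the p-value, scipy's F distribution can stand in for a printed F table; `f_oneway` also cross-checks the whole calculation. This is a sketch assuming scipy is installed, and it reuses the same hypothetical groups chosen to reproduce the video's sums of squares:

```python
from scipy import stats

# Critical F value at the 10% significance level,
# numerator df = 2, denominator df = 6 (the table lookup).
f_crit = stats.f.ppf(1 - 0.10, dfn=2, dfd=6)
print(round(f_crit, 2))  # 3.46

# p-value for the observed F statistic of 12.
p_value = stats.f.sf(12, dfn=2, dfd=6)
print(round(p_value, 3))  # 0.008

# Cross-check with scipy's built-in one-way ANOVA on hypothetical
# data that matches the video's sums of squares (F comes out to 12).
result = stats.f_oneway([3, 2, 1], [5, 3, 4], [5, 6, 7])
print(result.statistic > f_crit)  # True: reject the null hypothesis
```

Since 12 is well beyond the critical value of about 3.46, the p-value (about 0.008) is far below the 10% significance level, matching the conclusion above.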