The normal distribution (often referred to as the "bell curve") is at the core of most of inferential statistics. By assuming that many complex processes result in a normally distributed outcome (we'll see why this is reasonable), we can gauge the probability of a given result happening by chance.
To get the most out of this tutorial, you should come to it understanding what probability distributions and random variables are. You should also be very familiar with the notions of population and sample mean and standard deviation.
In this tutorial, we experience one of the most exciting ideas in statistics: the central limit theorem. Without it, it would be much harder to make inferences about population parameters given sample statistics. It tells us that, regardless of what the population distribution looks like, the distribution of the sample means (you'll learn what that is) will be approximately normal.
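You can see the central limit theorem in action with a short simulation. This is just an illustrative sketch: the population (exponential waiting times), the sample size of 40, and the 5,000 repetitions are all arbitrary choices.

```python
import random
import statistics

random.seed(42)

# A deliberately NON-normal population: exponential waiting times with mean 1.
def sample_mean(n):
    return statistics.mean(random.expovariate(1.0) for _ in range(n))

# Take many samples of size 40 and record each sample's mean.
means = [sample_mean(40) for _ in range(5000)]

# Even though the population is skewed, the sample means cluster around the
# population mean (1.0), and their spread shrinks like 1/sqrt(n) ≈ 0.16.
print(round(statistics.mean(means), 2))
print(round(statistics.stdev(means), 2))
```

If you plotted a histogram of `means`, it would look bell-shaped despite the skewed population it came from.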
It's a good idea to understand a bit about normal distributions before diving into this tutorial.
We all have confidence intervals ("I'm the king of the world!!!!") and non-confidence intervals ("No one loves me"). That is not what this tutorial is about.
This tutorial takes what you already know about the central limit theorem, sampling distributions, and z-scores and uses these tools to dive into the world of inferential statistics. It may seem magical at first, but from our sample, we can now make inferences about how likely it is that an interval contains our population mean.
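Here's a minimal sketch of that idea, using made-up measurements and the normal approximation (a z-interval rather than a t-interval, which is reasonable only for larger samples):

```python
from statistics import NormalDist, mean, stdev

# Hypothetical sample of 20 measurements (made-up data for illustration).
sample = [12.1, 11.8, 12.6, 12.0, 11.5, 12.4, 12.2, 11.9, 12.3, 12.0,
          11.7, 12.5, 12.1, 11.6, 12.2, 12.4, 11.9, 12.0, 12.3, 11.8]

x_bar = mean(sample)
se = stdev(sample) / len(sample) ** 0.5   # standard error of the mean
z = NormalDist().inv_cdf(0.975)           # ≈ 1.96 for 95% confidence

low, high = x_bar - z * se, x_bar + z * se
print(f"95% CI for the mean: {low:.2f} to {high:.2f}")
```

The "95%" refers to the procedure: if we repeated this sampling many times, about 95% of the intervals built this way would contain the true population mean.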
Ever wondered what pollsters mean when they say their results have a 3% "margin of error"? Well, this tutorial will not only explain what it means, but also give you the tools and understanding to be a pollster yourself!
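The classic "±3%" headline number comes from the standard error of a sample proportion. Here's a hedged sketch with a hypothetical poll of 1,000 people (both the poll size and the 52% result are made up):

```python
import math

n = 1000          # hypothetical number of people polled
p_hat = 0.52      # hypothetical observed proportion favoring a candidate

# Standard error of a sample proportion, then a 95% margin of error
# (1.96 is the z critical value for 95% confidence).
se = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * se

print(f"95% CI: {p_hat - margin:.3f} to {p_hat + margin:.3f}")
print(f"margin of error: ±{margin * 100:.1f}%")
```

With around 1,000 respondents, the margin of error lands near ±3 percentage points, which is why that number shows up in so many polls.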
This tutorial helps us answer one of the most important questions not only in statistics, but in all of science: how confident are we that a result from a new drug or process is due to an actual impact rather than to random chance?
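To make that concrete, here is a sketch of a one-sample z-test with entirely hypothetical numbers: suppose recovery time for a condition is known to average 10 days (standard deviation 2), and a sample of 50 patients on a new drug averaged 9.2 days.

```python
from statistics import NormalDist

pop_mean, pop_sd = 10.0, 2.0   # assumed known population parameters
sample_mean, n = 9.2, 50       # hypothetical sample result

# z-score of the sample mean under the null hypothesis (the drug does nothing)
z = (sample_mean - pop_mean) / (pop_sd / n ** 0.5)

# Two-sided p-value: the probability of a result at least this extreme
# arising purely by chance if the null hypothesis were true
p_value = 2 * NormalDist().cdf(-abs(z))
print(round(z, 2), round(p_value, 4))
```

A small p-value (here well under the conventional 0.05 cutoff) says the observed result would be very unlikely under pure chance, which is evidence of a real effect.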
If you are familiar with sampling distributions and confidence intervals, you're ready for this adventure!
You're already familiar with hypothesis testing with one sample. In this tutorial, we'll go further by testing whether the difference between the means of two samples is unlikely to be due purely to chance.
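A rough sketch of that two-sample comparison, using made-up scores and the normal approximation (with samples this size, a z-test is a reasonable stand-in for the t-test you'd normally use):

```python
from statistics import NormalDist, mean, stdev

# Two hypothetical groups of 30 test scores each (made-up data).
group_a = [82, 85, 88, 75, 90, 84, 79, 86, 91, 83, 80, 87, 78, 89, 85,
           84, 82, 88, 77, 86, 90, 81, 83, 85, 79, 88, 84, 86, 82, 87]
group_b = [78, 80, 83, 72, 85, 79, 74, 81, 86, 78, 75, 82, 73, 84, 80,
           79, 77, 83, 72, 81, 85, 76, 78, 80, 74, 83, 79, 81, 77, 82]

diff = mean(group_a) - mean(group_b)

# Standard error of the difference between two independent sample means
se = (stdev(group_a) ** 2 / len(group_a)
      + stdev(group_b) ** 2 / len(group_b)) ** 0.5

z = diff / se
p_value = 2 * NormalDist().cdf(-abs(z))
print(round(diff, 2), round(z, 2))
```

The question the test answers: if the two groups really had the same mean, how often would random sampling alone produce a gap this large?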
You've gotten good at hypothesis testing when you can make assumptions about the underlying distributions. In this tutorial, we'll learn about a new distribution (the chi-square one) and how it can help you (yes, you) infer what an underlying distribution even is!
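A taste of what that looks like in practice: a chi-square goodness-of-fit test asks whether observed category counts are consistent with a hypothesized distribution. Below is a sketch with invented die-roll counts; the 11.07 cutoff is the standard critical value for 5 degrees of freedom at the 5% significance level.

```python
# Is a six-sided die fair, i.e. does each face come up with probability 1/6?
observed = [18, 22, 16, 14, 12, 38]   # hypothetical counts from 120 rolls
expected = [sum(observed) / 6] * 6    # 20 per face if the die is fair

# The chi-square statistic sums (observed - expected)^2 / expected
# over the categories; big values mean a poor fit to the hypothesis.
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value for 5 degrees of freedom at the 5% level is about 11.07
print(round(chi_sq, 2), chi_sq > 11.07)
```

Here the statistic far exceeds the critical value (driven mostly by that suspicious pile of sixes), so we'd reject the hypothesis that the die is fair.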
You already know a good bit about hypothesis testing with one or two samples. Now we take things further by making inferences based on three or more samples. We'll use the very special F-distribution to do it (F stands for "fabulous").
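The heart of that technique (one-way ANOVA) is the F-statistic: the ratio of variation *between* group means to variation *within* groups. Here's a minimal sketch with three small made-up groups:

```python
from statistics import mean

# Three hypothetical groups of observations (made-up data).
groups = [
    [4, 5, 6, 5, 4],      # group 1
    [7, 8, 6, 7, 8],      # group 2
    [10, 9, 11, 10, 9],   # group 3
]

grand_mean = mean(x for g in groups for x in g)
k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total number of observations

# Between-group sum of squares (degrees of freedom: k - 1)
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares (degrees of freedom: n - k)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 1))
```

A large F says the group means differ by much more than the scatter inside each group would explain; comparing it against the F-distribution gives the p-value.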