Z-statistics vs. T-statistics
Sal breaks down the difference between Z-statistics and T-statistics. Created by Sal Khan.
Questions and answers
- At 4:43, you draw an arrow to the sample standard deviation and say, "if this is small, specifically, less than thirty, you're going to have a T-statistic." Shouldn't the arrow point at the n? Isn't it if n is under 30? I was unaware that the standard deviation of the sample had any effect on whether you use a Z-test or a T-test.
- From the author: Yes, it should point to n, not s.
- In a problem, how do you know when you need to use the z-table vs. the t-table?
- If you know the standard deviation of the population, use the z-table. If you don't, but you have a large sample size (traditionally over 30, though some teachers go up to 100 these days), then assume the population standard deviation is the same as the sample standard deviation and use the z-table. But if you don't know the population standard deviation and have a relatively small sample size, then use the t-table for the greatest accuracy.
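The decision rule in this answer can be sketched as a small helper function (hypothetical, not from the video; the 30-observation cutoff is the traditional rule of thumb mentioned above):

```python
def which_table(pop_sd_known: bool, n: int) -> str:
    """Pick the z-table or the t-table, per the rule of thumb above."""
    if pop_sd_known:
        return "z"                    # known sigma: z-table regardless of n
    return "z" if n >= 30 else "t"    # unknown sigma: large n lets s stand in for sigma
```

So `which_table(False, 12)` gives `"t"`, while `which_table(False, 50)` gives `"z"`.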
- Hello, I don't get it... What is the difference between the Z- and the t-statistic? It's the same formula for both, and the graph doesn't look different either. A clue, anyone? Thanks!
- The z-score and t-score tables themselves have different numbers, reflecting the fact that you can't have as much confidence in the data with a smaller sample size. You'll get a different tail probability from Z = 1.382 than from t = 1.382.
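A sketch of that comparison using only the standard library (the t tail is integrated numerically here; with 9 degrees of freedom, t = 1.382 sits near the 10% tail, while the same z-value cuts off only about 8%):

```python
import math

def normal_sf(x: float) -> float:
    """P(Z > x) for the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def t_pdf(x: float, df: int) -> float:
    """Density of Student's t with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_sf(x: float, df: int, upper: float = 60.0, steps: int = 100_000) -> float:
    """P(T > x), by trapezoidal integration of the density out to `upper`."""
    h = (upper - x) / steps
    total = 0.5 * (t_pdf(x, df) + t_pdf(upper, df))
    for i in range(1, steps):
        total += t_pdf(x + i * h, df)
    return total * h

print(normal_sf(1.382))    # about 0.083
print(t_sf(1.382, df=9))   # about 0.100 -- a noticeably fatter tail
```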
- Why is the mean of the t-distribution zero and the mean of the z-distribution equal to the population mean?
- Below is what I need to figure out:
On a test whose distribution is approximately normal with a mean of 50 and a standard deviation of 10, the results for three students were reported as follows:
Student Opie has a T-score of 60.
Student Paul has a z-score of -1.00.
Student Quincy has a z-score of +2.00.
Obtain the z-score and T-score for EACH student.
Show your calculations.
Who did better on the test?
How many standard deviation units is each score from the mean? Compare the results of the three students.
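This question goes unanswered in the thread. Assuming the usual psychometric convention that a T-score is T = 50 + 10z (an assumption; the problem's mean of 50 and standard deviation of 10 match it), the conversions are one-liners:

```python
def t_from_z(z: float) -> float:
    """Psychometric T-score (assumed convention): T = 50 + 10z."""
    return 50 + 10 * z

def z_from_t(t: float) -> float:
    """Invert the same convention: z = (T - 50) / 10."""
    return (t - 50) / 10

print(z_from_t(60))    # Opie:   z = 1.0, one sd above the mean
print(t_from_z(-1.0))  # Paul:   T = 40.0, one sd below the mean
print(t_from_z(2.0))   # Quincy: T = 70.0, two sds above -- Quincy did best
```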
- What is the difference between a "normal" distribution and a normalized distribution?
- Good question! They are totally different!
A normal distribution just means the good old bell curve that you know and love. The "standard" normal distribution is the bell curve with mean 0 and standard deviation 1, which lets you use your Z-table.
A normalized distribution means any distribution whose total area (or probability) underneath it equals 1. Of course every probability density function (PDF) should be normalized, but sometimes you make up some new shape for a PDF (say, some function f(x)), and you are happy with the shape, but then you calculate the total area under the curve and it's, say, 13. Well, then you have to take the additional step of dividing your new function by 13, so your normalized PDF would be f(x)/13, which now has a total area of 1 underneath.
Just to be clear, the standard normal distribution is, of course, normalized.
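That divide-by-the-area step is easy to demonstrate numerically (a sketch with a made-up curve, not a standard one):

```python
import math

def area(f, a, b, steps=100_000):
    """Trapezoidal estimate of the area under f on [a, b]."""
    h = (b - a) / steps
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps))
    return s * h

f = lambda x: 4 * math.exp(-abs(x))   # a made-up PDF shape; its true area is 8
c = area(f, -50, 50)                  # close to 8 (tails beyond +/-50 are negligible)
pdf = lambda x: f(x) / c              # divide by the area to normalize
print(area(pdf, -50, 50))             # 1.0, up to rounding
```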
- The comment that for n > 30, (x̄ − μ)/(s/√n) is normal is not correct. The convergence of the CLT depends on how non-normal the population distribution is. For example, consider a Bernoulli trial. The rule of thumb for using the normal approximation is nπ > 5 and n(1 − π) > 5. If π = 1%, then n must exceed 500; n = 30 is not large enough.
When n > 30 or so, the t- and z-distributions are approximately equal, and textbooks stop giving percentiles of the t-distribution in their tables.
- I think that for a non-binomial setting, which has more than two outcomes and deals with averages, you can attain a probability through a Z-statistic so long as n > 30. For a binomial setting, like the Bernoulli trial you give as an example, with only two outcomes and dealing with proportions, the rule of thumb for the normal approximation is indeed np > 5 and n(1 − p) > 5 (other sources use np > 10 and n(1 − p) > 10). So the "rules of thumb" for the normal approximation depend on the setting.
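The binomial rule of thumb discussed above (np > 5 and n(1 − p) > 5) is easy to check in code (a hypothetical helper; some texts use 10 as the threshold instead):

```python
def normal_approx_ok(n: int, p: float, threshold: float = 5) -> bool:
    """Both expected counts, n*p and n*(1-p), must exceed the threshold."""
    return n * p > threshold and n * (1 - p) > threshold

print(normal_approx_ok(30, 0.5))    # True:  expected counts are 15 and 15
print(normal_approx_ok(400, 0.01))  # False: expected successes 400*0.01 = 4 < 5
```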
- I thought z = (x̄ − μ) / (standard deviation).
You said z = (x̄ − μ) / (standard deviation / √n).
Can you please explain?
- X and X̄ are standardised slightly differently. In both cases, the denominator is the square root of the variance, like so:
For X, Z = (X − μ) / σ
For X̄, Z = (X̄ − μ) / (σ / √n)
This fits with what we know about the central limit theorem. For X, the variance is σ². For X̄, however, the variance is σ²/n, because we expect that X̄ will have a smaller variance (or tend to be closer to the mean) as n increases.
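The σ/√n behaviour in this answer can be verified by simulation (a sketch: repeatedly draw samples of size n and measure the spread of their means):

```python
import random, statistics

random.seed(1)
sigma, n, trials = 10.0, 25, 20_000

# Draw many samples of size n and record each sample mean:
means = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
         for _ in range(trials)]

# The spread of the sample means is close to sigma / sqrt(n) = 10 / 5 = 2:
print(statistics.stdev(means))
```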
- At 5:41, Sal mentions a "rule of thumb". I'm new to statistics, but I can't seem to find it in my books or these videos. What is the "rule of thumb", and is there a video I'm missing?
- "Rule of thumb" is just an expression; it means a good generalization or a simple way to remember something. There is no single "Rule of Thumb." :P
- What do you do if the population variance is known but the sample size is less than 30?
- What are some other ways to tell whether a z-statistic or a t-statistic should be used?
- Whether you know the population standard deviation, or only have the sample standard deviation. That is actually the only thing to consider when choosing between t- and z-statistics.
Video transcript
I want to use this video
to kind of make sure we intuitively and otherwise
understand the difference between a Z-statistic--
something I have trouble saying-- and a T-statistic. So in a lot of what we're doing
in this inferential statistics, we're trying to
figure out what is the probability of getting a
certain sample mean. So what we've been doing,
especially when we have a large sample size-- so let
me just draw a sampling distribution here. So let's say we have a sampling
distribution of the sample mean right here. It has some assumed mean value
and some standard deviation. What we want to do is any result
that we get, let's say we get some sample
mean out here. We want to figure out the
probability of getting a result at least as
extreme as this. So you can either figure out the
probability of getting a result below this and subtracted
that from 1, or just figure out this area
right over there. And to do that we've been
figuring out how many standard deviations above the mean
we actually are. The way we figured that out is
we take our sample mean and subtract from that
what we assume the mean should
be, or maybe we don't know what this is. And then we divide that by the
standard deviation of the sampling distribution. This is how many standard
deviations we are above the mean. That is that distance
right over there. Now, we usually don't know
what this is either. We normally don't know
what that is either. And the central limit theorem
told us that assuming that we have a sufficient sample size,
this thing right here is going to be the same
thing as the
standard deviation of our population divided by
the square root of our sample size. So this thing right over here
can be re-written as our sample mean minus the mean of
our sampling distribution of the sample mean divided by
this thing right here-- divided by our population standard deviation
divided by the square root of our sample size. And this is essentially our
best sense of how many standard deviations away from
the actual mean we are. And this thing right here, we've
learned it before, is a Z-score, or when we're dealing
with an actual statistic when it's derived from the sample
mean statistic, we call this a Z-statistic. And then we could look it up
in a Z-table or in a normal distribution table to say what's
the probability of getting a value of this
Z or greater. So that would give us
that probability. So what's the probability
of getting that extreme of a result? Now normally when we've done
this in the last few videos, we also do not know what the
standard deviation of the population is. So in order to approximate that
we say that the Z-score is approximately, or the
Z-statistic, is approximately going to be-- so let me just
write the numerator over again-- over, we estimate this
using our sample standard deviation-- let me do this in
a new color-- using our sample standard deviation. And this is OK if our sample
size is greater than 30. Or another way to think about
it is this will be normally distributed if our sample
size is greater than 30. Even this approximation will
be approximately normally distributed. Now, if your sample size is less
than 30, especially if it's a good bit less than
30, all of a sudden this expression will not be
normally distributed. So let me re-write the
expression over here. Sample mean minus the mean of
your sampling distribution of the sample mean divided by your
sample standard deviation over the square root of
your sample size. We just said if this thing is
well over 30, or at least 30, then this value right here, this
statistic, is going to be normally distributed. If it's not, if this is small,
then this is going to have a T-distribution. And then you're going to do the
exact same thing you did here, but now you would assume
that the bell is no longer a normal distribution-- in this
example it was normal; all Z's are normally
distributed. Over here you have a T-distribution,
and this will actually be a normalized T-distribution
right here because we subtracted out the mean. So in a normalized
T-distribution, you're going to have a mean of 0. And what you're going to do is
you want to figure out the probability of getting a T-value
at least this extreme. So this is your T-value you
would get, and then you essentially figure out the area
under the curve right over there. So a very easy rule of thumb
is: calculate this quantity either way.
If your sample size is more than 30, your sample
standard deviation is going to be a good approximator for your population standard deviation. And so this whole thing is
going to be approximately normally distributed, and so
you can use a Z-table to figure out the probability
of getting a result at least that extreme. If your sample size is small,
then this statistic, this quantity, is going to have a
T-distribution, and then you're going to have to use a
T-table to figure out the probability of getting a T-value
at least this extreme. And we're going to see this
in an example a couple of videos from now. Anyway, hopefully that helped
clarify some things in your head about when to use a
Z-statistic or when to use a T-statistic.
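The decision Sal describes can be wrapped up in one sketch (a hypothetical helper, not from the video; `statistics.stdev` computes the sample standard deviation s):

```python
import math, statistics

def analyze(sample, mu0):
    """Compute (x̄ − μ₀)/(s/√n) and say which table to consult."""
    n = len(sample)
    xbar = statistics.fmean(sample)
    s = statistics.stdev(sample)                 # sample standard deviation
    stat = (xbar - mu0) / (s / math.sqrt(n))
    return stat, ("z" if n >= 30 else "t")       # Sal's rule of thumb

stat, table = analyze([12.1, 11.8, 12.4, 12.0, 11.6, 12.3, 12.2, 11.9], 12.0)
print(table)   # "t" -- only 8 observations, so consult the t-table
```

With 30 or more observations the same function would return `"z"`, and you would look the statistic up in a normal table instead.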