R-squared or coefficient of determination

In linear regression, r-squared (also called the coefficient of determination) is the proportion of variation in the response variable that is explained by the explanatory variable in the model. Created by Sal Khan.
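As a rough illustration of that definition, here is a minimal sketch in plain Python (the numbers are made up, and the predicted values are assumed to come from some already-fitted line):

def r_squared(y, y_pred):
    # SE_line: squared error between the data and the fitted line's predictions
    se_line = sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred))
    # SE_y: total variation in y, i.e. squared error from the mean of y
    y_bar = sum(y) / len(y)
    se_y = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - se_line / se_y

# Hypothetical responses vs. values predicted by a regression line.
print(r_squared([1, 2, 2, 4], [1.1, 1.9, 2.4, 3.6]))  # about 0.93, so the line explains most of the variation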

  • Ethan Dlugie:
    How do we know that the squared error of the line is always less than (or equal to) the variance in the y values?
    (39 votes)
    • Mohammed Rahman:
      The regression line is the best fit for a given scatter plot, but if you draw the mean of y, that would just be a horizontal line in that plot, and that horizontal line definitely wouldn't fit as well as the regression line (the regression line minimizes the squared error over all possible lines, and the horizontal line through the mean is one of those candidates). So the total squared error from the mean (SE_y) is always at least as large as the total squared error from the regression line (SE_line). Therefore SE_line / SE_y can never be greater than 1, and hence R^2 can't be negative. Hope that helps. (A quick numeric check is sketched just below this reply.)
      (68 votes)
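To make that reply concrete, here is a quick numeric check, assuming numpy and some made-up data (np.polyfit is used only as a convenient least-squares fitter):

import numpy as np

# Hypothetical scatter data with an upward trend.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

m, b = np.polyfit(x, y, 1)                  # least-squares line y = m*x + b
se_line = np.sum((y - (m * x + b)) ** 2)    # squared error from the regression line
se_y = np.sum((y - y.mean()) ** 2)          # squared error from the horizontal line y = y_bar

# The horizontal line through the mean is itself a candidate line (slope 0, intercept y_bar),
# and the least-squares line minimizes squared error over all lines, so se_line <= se_y.
print(se_line <= se_y)      # True
print(1 - se_line / se_y)   # R^2, which therefore lands between 0 and 1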
  • daniel.corazza.89:
    Hello everyone. I have some trouble understanding the concepts explained in this video on a deeper level. I would like to discuss some things.
    In this video Sal talks a lot about the "variation in y" and "how much of the variation in y is described". However, I found myself wondering what is really this variation in y, what does it describe? Why do we care about this number?
    I've always thought that the variance (or variation) of something is important when that something has a central tendency, and points tend to scatter randomly around that centre. The variance helps you quantify how much those points scatter around.
    Here, however, we have y's that are positively correlated to the x's, which means that if you pick higher and higher values for x, you also get higher and higher values for y. So, there is really no central tendency for the y values, and in fact, the values you calculate for the "mean_y" and the "variation in y" will vary depending on which x values you choose. If we take points that have higher x's, our mean_y will increase, and if we take points with a wider range in x, our "variation in y" will also increase! So it seems to me that this "variation in y" has really no meaning in this context - it's an arbitrary number that depends on which x values we happen to choose. So why would we care about how much this random number we calculate and call "variation in y" is and how much of it is "explained" - whatever that even means?

    Now, I've managed to explain to myself what's been done here in a different way, and this kind of makes intuitive sense to me. Unlike the variation in y, the Standard Error is a much more significant concept in this context. It measures the error that one commits with their estimate of the relation between x and y (the regression line).
    The variation in y, as it was defined, measures the error from the mean_y. So, this is equivalent to the error that one commits if they fit the points with a horizontal line y = mean_y. Now that makes sense to me, it's what one would do if they had no better tools for fitting lines to points than saying "we want to fit a line to a bunch of points? Hey, why don't we just take a horizontal line that goes through the mean of the y values we have".
    In fact, y = mean_y is the line of the form y = constant that minimizes the SE. So, it's the best line of its kind. Still, a constant line is the most basic model one could come up with, as a linear function, an exponential function, a quadratic function all can adapt better to points and have more "degrees of freedom" (more parameters to be played with) than a line y = constant. So, SE_y can be seen as the error that is committed by fitting points with the worst - or most basic - model available.
    If we see things this way, SE_line / SE_y kind of measures how much better a fit we have with our model compared to the most basic model available.
    Does that not make more intuitive sense? (A small numeric check of the best-constant-fit idea is sketched just below.)
    (18 votes)
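On the point above that y = mean_y is the best line of the form y = constant: here is a tiny check in plain Python (hypothetical y-values) that no constant fit beats the mean in squared error:

# Hypothetical y-values; compare the squared error of constant fits y = c.
y = [2.0, 3.5, 5.0, 6.5, 8.0]
y_bar = sum(y) / len(y)

def se_constant(c):
    return sum((yi - c) ** 2 for yi in y)

# Scan candidate constants around the mean; none does better than c = y_bar.
candidates = [y_bar + step / 10 for step in range(-20, 21)]
best = min(candidates, key=se_constant)
print(best == y_bar, se_constant(y_bar))   # True 22.5 -- the mean minimizes the squared error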
  • Anshu Dwibhashi:
    Why do we minimize r squared instead of r? I mean, why do we minimize the square distance instead of just the distance? Is it just for accuracy or something deeper?
    (7 votes)
  • Madhur Devkota:
    Why are we comparing the squared error of the line (S.E. line) with the total variation in y (S.E. y)?
    Though it seems convincing and logical to compare it with S.E. y, my question is that S.E. y may not be a perfectly absolute reference for S.E. line.
    (12 votes)
  • shimmichimmi:
    Can we say that the higher the value of R2, the greater the probability that the model is correct?
    And when comparing a model with others, is the most important factor finding the highest R2?
    (5 votes)
    • fosterz:
      Not necessarily. R2 only measures how well a line approximates points on a graph. It is NOT a probability value. How likely a model is correct depends on many things and is the subject of hypothesis testing (covered in future videos). It is possible (and common, even in science) that a linear model describes the data perfectly even though it is the wrong model for whatever process generated the data.
      Say I am trying to model outdoor air temperature over time, but I only measure air temperatures once a day, and only during spring. If the data turns out roughly linear (it probably would not be), a best-fit line could have a high R2, but the line would not describe the 24-hour variation in temperature caused by night/day cycles. More importantly, the line would predict that temperature increases forever (since it was warming in spring, when we sampled), which clearly is not true, even under the most dire global warming predictions ; ). R2 only matters if you pick the right model and sample at the right resolution. (A rough simulation of this idea is sketched just below this reply.)
      (13 votes)
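A rough simulation of that temperature example, assuming numpy (the seasonal curve and noise level are invented purely for illustration):

import numpy as np

np.random.seed(0)

# Noon temperatures sampled once a day, but only during ~60 days of spring.
# The "true" yearly process is seasonal (sinusoidal), not linear.
days = np.arange(60)
true_temps = 10 + 12 * np.sin(2 * np.pi * days / 365.0)
temps = true_temps + np.random.normal(0, 0.5, size=days.size)

m, b = np.polyfit(days, temps, 1)           # best-fit line on the spring window
pred = m * days + b
r2 = 1 - np.sum((temps - pred) ** 2) / np.sum((temps - temps.mean()) ** 2)

# R2 comes out high (well above 0.9) on this window, yet extrapolating the line
# predicts temperatures rising forever, which the seasonal process clearly does not do.
print(r2)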
  • Bharath Rangarajan:
    Issue: the variation in x does not refer to the line. Solution: m and b provide the best match to the variation in y using a straight-line model. Since the data does not lie exactly on a line, a line is not a perfect explanation of the data or a perfect match to the variation in y. R-squared compares how much of the true variation is in fact explained by the best straight line provided by the regression model. If R-squared is very small, it indicates you should consider models other than straight lines.
    (7 votes)
    • tbeatty:
      If R-squared is close to zero, a line may not be appropriate (if the data is non-linear), or the explanatory variable just doesn't do much explaining when it comes to the response variable (y-variable). In that case, you should consider adding another explanatory variable (multiple regression), or find a new explanatory variable altogether.
      (5 votes)
  • sherrellbc:
    This makes absolutely no sense at all. How does the mean of y (y_bar) subtracted from any given y represent an error? What does the average y (y_bar) represent? What if, as x increases, there IS an upward trend?

    This would make sense if the y value was a constant, say 6. You could measure the total error by taking the difference of each measured y and the value 6. The average, at least to me, really does not represent anything. So, how can a measured value of y over the average of all measured y's represent an error of anything? If the measured y's were for the same x value, then a variation in y could be measured as an error. But if the y has a relationship with x such that it increases as x increases, how does y/y_bar represent error in any sense?

    -----------------------------------------
    For example:

    You are given an unknown resistance. You decide to experimentally determine the resistance of the component by measuring its i-V (current, voltage) curve (response).

    Given that X is voltage, and Y is current, you may measure something like this:

    In an ideal case:
    X = 10V, Y = 1Amp
    X = 20V, Y = 2Amp
    X = 30V, Y = 3Amp
    If you plot this curve, there is quite obviously a linear relationship. And, if you are familiar with Ohm's relationship (law, if you like), we have the resistance = 10 Ohms.

    -- The point is, as Voltage increases, current increases as well for any constant resistance R. So, we have a positively sloping linear relationship.

    So, from the ideal case above.
    y_bar = 2 Amps.
    So, given what we have in this video:
    The total error associated with our measured values (current, Y) is given by:
    (y1-y_bar)^2 + (y2-y_bar)^2 + (y3-y_bar)^2 = (1-2)^2 + (2-2)^2 + (3-2)^2 = 2

    Given an ideal world, where the resistance was EXACTLY equal to 10Ohms, and we measured precisely the expected values of current needed to resolve this, how can we say that the measured data had a total error associated with our measured values of current equal to 2?
    (2 votes)
    • Dr C:
      You raised a number of points here, I'll try to address them all:

      > "How does the mean of y (y_bar) subtracted from any given y represent an error?"

      When we say "error" we're really meaning "deviation," specifically, deviation from the mean. Ybar is a measure of center, or a "typical" value, and the deviations of (Y - Ybar) can be used to give us some idea of the spread around that measure of center.

      > "What does the average y (y_bar) represent?"

      It represents the norm, or a "typical" y-value.

      > "What if, as x increases, there IS an upward trend?"

      That would be an indication of positive correlation between X and Y.

      > "This would make sense if the y value was a constant, say 6. You could measure the total error by taking the difference of each measured y and the value 6."

      I think this is where you may be going astray. If the y-value was a constant, like 6, there would be no variability, all the y-values would be 6. Just associate Ybar, the average, with this value of 6 that you conjecture.

      The idea in correlation is to measure above average vs below average for both X and Y. Correlation is looking at when values are above/below average - meaning: higher than normal or lower than normal, and it is looking at this for both X and Y simultaneously. In a sense, it's asking the question "Are larger above-average Y-values associated with above-average X-values?" This is why we care about and need the average.

      In your example with Ohms, you only calculated what we'd call the Sum of Squares for Y: SUM{ (Yi - Ybar)^2 }. This is closely related to the variance (divide it by n and you get the variance). You obtained a 2 (which is correct). And this IS a measure of variability for Y: not all of the Y-values are equal, so there is variability among the Y's! When we take X into account, we'd see that we have a deterministic relationship, but if we look at each variable alone, there are differences, and hence variability. (A short numeric sketch tying this back to R-squared follows this reply.)
      (11 votes)
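Tying that back to R-squared, here is a short sketch in plain Python using the I-V numbers from the question above:

# Ideal Ohm's-law measurements: V = 10, 20, 30 volts -> I = 1, 2, 3 amps.
x = [10.0, 20.0, 30.0]   # voltage
y = [1.0, 2.0, 3.0]      # current

y_bar = sum(y) / len(y)
ss_y = sum((yi - y_bar) ** 2 for yi in y)   # (1-2)^2 + (2-2)^2 + (3-2)^2 = 2, the "variation in y"

# The best-fit line is exact here: I = V / 10 (i.e. R = 10 ohms), so its squared error is 0.
se_line = sum((yi - xi / 10.0) ** 2 for xi, yi in zip(x, y))

r_squared = 1 - se_line / ss_y
print(ss_y, se_line, r_squared)   # 2.0 0.0 1.0 -> all of the variation in y is explained by x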
  • Jeffrey Wan:
    I don't exactly see why we are comparing SEline to SEy. Why do we care about SEy here?
    (5 votes)
    • fosterz:
      The r-squared coefficient is the percentage of the variation in y that is "explained" by the line, compared to how much is explained by simply using the average of y. You could also think of it as how much closer the line is to any given point compared to the average value of y. SE_y is the total variation in y (the sum of squared distances from the mean of y) and tells you how much the data deviates from the mean of y. The variation in y gives you a baseline by which to judge how much better the best-fit line fits the data compared to the y average.
      (4 votes)
  • Lois Duhourcau:
    First of all, I feel this video is genius with the pictorial description of the errors vs. the mean and the errors vs. the regression line. However, I am slow, and I am lost at the following step: when we say that SE(Line) shows what is NOT explained by the regression line, and therefore SE(Line)/SE(y) = the percentage of variation that is not explained by the variation in x.

    A few questions:

    1. Why is the variation in x equivalent to the regression line? Is this just linguistics to say that the regression line is the result of independent variable x to obtain y, so the regression line describes the variation in x and its impact on y?

    2. SE(Line) / SE(y) essentially is equivalent to: the errors of the regression (which are thus not explained by the regression) divided by the total variation in y. For that reason, we know what % is not explained by the regression, and then we can deduce what is explained by the regression. Is that the right way to think about it?
    (6 votes)
    • Parsa Abangah:
      1. Yes, the variation in x is equivalent to the regression line because the regression line is a function of the independent variable x. The regression line describes the relationship between x and y and summarizes the overall pattern of the data. The variation in x is important to consider because it helps explain the variability in y.

      2. SE(Line)/SE(y) gives the proportion of the total variation in y that is not explained by the regression line. This proportion represents the amount of variation in y that is not accounted for by the variation in x. Therefore, the remaining proportion of the variation in y is explained by the regression line. SE(Line)/SE(y) is essentially a measure of how poorly the regression line fits the data: the closer it is to 0 (and so the closer R^2 = 1 - SE(Line)/SE(y) is to 1), the better the fit.
      (2 votes)
  • 18huynha:
    What is the difference between residuals and error?
    (3 votes)

Video transcript

In the last few videos, we saw that if we had n points, each of them has x and y-coordinates. Let me draw n of those points. So let's call this point one. It has coordinates x1, y1. You have the second point over here. It had coordinates x2, y2. And we keep putting points up here and eventually we get to the nth point. That has coordinates xn, yn.

What we saw is that there is a line that we can find that minimizes the squared distance. This line right here, I'll call it y is equal to mx plus b. There's some line that minimizes the squared distance to the points. And let me just review what those squared distances are. Sometimes it's called the squared error. So this is the error between the line and point one. So I'll call that error one. This is the error between the line and point two. We'll call this error two. This is the error between the line and point n.

So if you wanted the total error, if you want the total squared error-- this is actually how we started off this whole discussion-- the total squared error between the points and the line, you literally just take the y value of each point. So for example, you would take y1. That's this value right over here. You take y1 minus the y value at this point on the line. Well, that point on the line is, essentially, the y value you get when you substitute x1 into this equation. So I'll just substitute x1 into this equation. So minus m x1 plus b. This right here, that is this y value right over here. That is m x1 plus b. I don't want to get my graph too cluttered, so I'll just delete that there. That is error one right over there. And we want the squared errors between each of the points and the line. So that's the first one. Then you do the same thing for the second point. And we started our discussion this way: y2 minus m x2 plus b, squared, all the way-- I'll do dot dot dot to show that there are a bunch of these that we have to do until we get to the nth point-- all the way to yn minus m xn plus b, squared.

And now that we actually know how to find these m's and b's, I showed you the formula. And in fact, we've proved the formula. We can find this line. And if we want to say, well, how much error is there? We can then calculate it, because we now know the m's and the b's. So we can calculate it for a certain set of data.

Now, what I want to do is come up with a more meaningful estimate of how good this line is fitting the data points that we have. And to do that, we're going to ask ourselves the question: what percentage of the variation in y is described by the variation in x? So let's think about this. How much of the total variation in y-- there's obviously variation in y. This y value is over here. This point's y value is over here. There is clearly a bunch of variation in the y. But how much of that is essentially described by the variation in x, or described by the line?

So let's think about that. First, let's think about what the total variation is. How much total variation is there in y? So let's just figure out what the total variation in y is. It's really just a tool for measuring: when we think about variation, and this is even true when we thought about variance, which was the mean variation in y, we think about the squared distance from some central tendency. And the best central measure we can have of y is the arithmetic mean. So we could just say the total variation in y is just going to be the sum of the squared distances of each of the y's from the mean. So you get y1 minus the mean of all the y's, squared, plus y2 minus the mean of all the y's, squared, plus-- and you just keep going all the way to the nth y value-- yn minus the mean of all the y's, squared. This gives you the total variation in y.

You can just take out all the y values. Find their mean. It'll be some value, maybe it's right over here someplace. And so you can even visualize it the same way we visualized the squared error from the line. So if you visualize it, you can imagine a line that's y is equal to the mean of y, which would look just like that. And what we're measuring over here, this error right over here, is the square of this distance right over here, between this point vertically and this line. The second one is going to be this distance, just right up to the line. And the nth one is going to be the distance from there all the way to the line right over there. And there are these other points in between. This is the total variation in y. Makes sense. If you divide this by n, you're going to get what we typically associate as the variance of y, which is kind of the average squared distance. Now, we have the total squared distance.

So what we want to do is-- how much of the total variation in y is described by the variation in x? So maybe we can think of it this way. For our denominator, we want the total variation in y. Let me write it this way. Let me call this the squared error from the average. Maybe I'll call this the squared error from the mean of y. And this is really the total variation in y. So let's put that as the denominator: the total variation in y, which is the squared error from the mean of the y's.

Now we want to know what percentage of this is described by the variation in x. Now, what is not described by the variation in x? We want to know how much is described by the variation in x, but what if we want to know how much of the total variation is not described by the regression line? Well, we already have a measure for that. We have the squared error of the line. This tells us the square of the distances from each point to our line. So it is exactly this measure. It tells us how much of the total variation is not described by the regression line. So if you want to know what percentage of the total variation is not described by the regression line, it would just be the squared error of the line, because this is the total variation not described by the regression line, divided by the total variation.

So let me make it clear. This, right over here, tells us what percentage of the total variation is not described by the variation in x, or by the regression line. So to answer our question, what percentage is described by the variation? Well, the rest of it has to be described by the variation in x, because our question is what percent of the total variation is described by the variation in x, and this is the percentage that is not described. So if this number is 30%-- if 30% of the variation in y is not described by the line, then the remainder will be described by the line. So we could essentially just subtract this from 1. So if we take 1 minus the squared error between our data points and the line, over the squared error between the y's and the mean y, this actually tells us what percentage of the total variation is described by the line. You can either view it as described by the line or by the variation in x. And this number right here, this is called the coefficient of determination. It's just what statisticians have decided to name it. And it's also called R-squared.
You might have even heard that term when people talk about regression. Now let's think about it. If the squared error of the line is really small, what does that mean? It means that these errors, right over here, are really small, which means that the line is a really good fit. So let me write it over here: if the squared error of the line is small, it tells us that the line is a good fit. Now, what would happen over here? Well, if this number is really small, this is going to be a very small fraction over here. 1 minus a very small fraction is going to be a number close to 1. So then our R-squared will be close to 1, which tells us that a lot of the variation in y is described by the variation in x. Which makes sense, because the line is a good fit.

Now take the opposite case. If the squared error of the line is huge, then that means there's a lot of error between the data points and the line. So if this number is huge, then this number over here is going to be huge, or it's going to be a percentage close to 1, and 1 minus that is going to be close to 0. And so if the squared error of the line is large, this whole fraction is going to be close to 1. And if this whole fraction is close to 1, then the whole coefficient of determination, the whole R-squared, is going to be close to 0, which makes sense. That tells us that very little of the total variation in y is described by the variation in x, or described by the line.

Well, anyway, everything I've been dealing with so far has been a little bit in the abstract. In the next video, we'll actually look at some data samples and calculate their regression line, and also calculate the R-squared, and see how good of a fit it really is.
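As a preview of that computation, here is a rough worked sketch in plain Python with made-up data points: the slope and intercept come from one equivalent, mean-based form of the least-squares formulas from the earlier videos, and R-squared follows the definition above.

# Made-up data points.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 2.5, 4.5, 5.0]
n = len(x)

x_bar = sum(x) / n
y_bar = sum(y) / n
xy_bar = sum(xi * yi for xi, yi in zip(x, y)) / n
x2_bar = sum(xi * xi for xi in x) / n

# Least-squares slope and intercept.
m = (xy_bar - x_bar * y_bar) / (x2_bar - x_bar ** 2)
b = y_bar - m * x_bar

# Squared error of the line: sum of (y_i - (m*x_i + b))^2.
se_line = sum((yi - (m * xi + b)) ** 2 for xi, yi in zip(x, y))

# Total variation in y: squared error from the mean, sum of (y_i - y_bar)^2.
se_y = sum((yi - y_bar) ** 2 for yi in y)

# Coefficient of determination (R-squared).
r_squared = 1 - se_line / se_y
print(m, b, se_line, se_y, r_squared)   # roughly 1.1, 0.75, 0.45, 6.5, 0.93: the line describes most of the variation in y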