R2 shows how well the data points fit a curve or line. Adjusted R2 also indicates how well the data points fit a curve or line, but adjusts for the number of terms in a model. If you add more and more useless variables to a model, adjusted r-squared will decrease. If you add more useful variables, adjusted r-squared will increase.
Adjusted R2 will always be less than or equal to R2:

Adjusted R2 = 1 − (1 − R2)(n − 1) / (n − k − 1)

where:
n is the number of points in your data sample.
k is the number of independent regressors, i.e. the number of variables in your model, excluding the constant.
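The formula above can be sketched as a small helper function; the example values for R2, n, and k below are made up for illustration.

```python
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Adjust R^2 for the number of regressors k, given n sample points."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Hypothetical model: R^2 = 0.8 from n = 50 points and k = 5 regressors.
# The adjustment pulls the value slightly below 0.8.
print(adjusted_r2(0.8, 50, 5))
```

Note that the penalty grows with k: the more regressors you add relative to your sample size, the further adjusted R2 drops below R2.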
When adding more predictors, SSE always decreases, so R2 always increases (SST stays the same, and R2 = 1 − SSE/SST), so R2 can’t suggest whether the model is significantly improved or not.
R2 only measures the reduction in SSE, not the loss in degrees of freedom.
Both R2 and the adjusted R2 give you an idea of how many data points fall within the line of the regression equation. However, there is one main difference between R2 and the adjusted R2: R2 assumes that every single variable explains the variation in the dependent variable. The adjusted R2 tells you the percentage of variation explained by only the independent variables that actually affect the dependent variable.
R2 increases with every predictor added to a model. Because R2 always increases and never decreases, the model can appear to be a better fit with the more terms you add to it. This can be completely misleading.
Similarly, if your model has too many terms and too many high-order polynomials, you can run into the problem of over-fitting the data. When you overfit data, a misleadingly high R2 value can lead to misleading projections.
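The misleading behavior above can be demonstrated numerically. This sketch (using NumPy least squares; the data and seed are arbitrary) fits one real predictor plus successively more pure-noise columns: R2 never decreases as junk predictors are appended, while adjusted R2 typically falls.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
x = rng.normal(size=(n, 1))
y = 2 * x[:, 0] + rng.normal(size=n)           # one real signal plus noise

def r2_pair(X, y):
    """Return (R^2, adjusted R^2) for an OLS fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])  # prepend constant term
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = np.sum((y - A @ beta) ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    r2 = 1 - sse / sst
    k = X.shape[1]                             # regressors excluding constant
    adj = 1 - (1 - r2) * (len(y) - 1) / (len(y) - k - 1)
    return r2, adj

X = x
results = []
for _ in range(4):                             # keep appending pure-noise columns
    r2, adj = r2_pair(X, y)
    results.append((r2, adj))
    print(f"k={X.shape[1]}: R2={r2:.4f}  adjusted R2={adj:.4f}")
    X = np.column_stack([X, rng.normal(size=n)])
```

Because least squares can only fit the sample better (or no worse) with extra columns, the R2 column in the printout is non-decreasing even though the added predictors are pure noise; the adjusted R2 column penalizes them.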