why does statistical power decrease with more predictors?


Yuri
When too many predictors are included in one model, statistical power is reduced, undermining all the relations in the model.

Why?

Re: why does statistical power decrease with more predictors?

Rich Ulrich
 ... Not an SPSS question, but, anyway ...

There's a mathematical reason, and there's a design reason; folks are usually
talking about the mathematical reason, which assumes a fairly small N.

For the math:  Look at the formula for the error term in (say) regression.

When a new predictor's F-test on its contribution comes out less than 1.0, the
size of the Mean-square error will increase.  If the predictors are correlated,
which they usually are, additional ones after the first few will add this sort
of noise.  As the denominator degrees of freedom are reduced, the error gets
larger and larger.  Of course, adding a new predictor that /matters/
(and is independent of the one in our hypothesis) will increase the power, not lower it.
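The claim about F-tests below 1.0 is in fact an exact algebraic identity: for one added predictor, F = (SSE_old - SSE_new) / MSE_new, so F < 1 holds precisely when the mean-square error goes up. A minimal numpy sketch (not from the thread; variable names and the seed are illustrative) that checks this on simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
x1 = rng.normal(size=N)
y = 2.0 * x1 + rng.normal(size=N)   # one predictor that matters
junk = rng.normal(size=N)           # a pure-noise predictor

def sse_mse(X, y):
    """SSE and mean-square error (SSE / (N - k - 1)) for intercept + X."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    r = y - Xd @ beta
    sse = r @ r
    return sse, sse / (len(y) - Xd.shape[1])

sse_old, mse_old = sse_mse(x1[:, None], y)
sse_new, mse_new = sse_mse(np.column_stack([x1, junk]), y)

# F-test for the junk predictor's contribution (1 numerator d.f.)
F = (sse_old - sse_new) / mse_new
print(f"F = {F:.3f}; MSE goes {mse_old:.3f} -> {mse_new:.3f}")
# Identity: F < 1 exactly when the mean-square error increases.
assert (F < 1) == (mse_new > mse_old)
```

Changing the seed changes whether the junk predictor happens to land above or below F = 1, but the equivalence between "F < 1" and "MSE got worse" holds for any data.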

Another perspective on the same thing: the "R-squared by chance" in a regression
is R2 = k/(N-1), where k is the number of predictors.  With the assumed normality
for outcome and predictors, you should -- by chance -- "fit" perfectly a model with
N = 16 and k = 15, just as you can always draw a straight line through two points with
one predictor.  Thus, if your predictors account for half the d.f., then the R2 expected
by chance would be 0.50; any test being performed must contrast the R2 achieved
with the R2 expected by chance.  That is, even a regression achieving R2 = 0.50
(which seems like a lot) will /not/ be "significant" if 0.50 is not much more than chance.
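Both parts of the chance-R² point are easy to see by simulation. A short numpy sketch (my own illustration, not from the thread) fitting pure noise: with k = N - 1 predictors the fit is exact, and across repetitions the average R² of unrelated predictors comes out near k/(N-1):

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    r = y - Xd @ beta
    return 1 - (r @ r) / ((y - y.mean()) @ (y - y.mean()))

# k = N - 1 predictors "fit" any outcome perfectly, by chance alone.
N, k = 16, 15
r2_full = r_squared(rng.normal(size=(N, k)), rng.normal(size=N))
print(f"N=16, k=15: R^2 = {r2_full:.6f}")   # essentially 1.0

# With predictors unrelated to the outcome, average R^2 ~ k / (N - 1).
N, k, reps = 40, 10, 2000
r2s = [r_squared(rng.normal(size=(N, k)), rng.normal(size=N))
       for _ in range(reps)]
print(f"mean chance R^2 = {np.mean(r2s):.3f}, k/(N-1) = {k/(N-1):.3f}")
```

The second print shows the two numbers agreeing to a couple of decimal places, which is why a test must compare the achieved R² against this chance baseline rather than against zero.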

The design reason matters regardless of the N.  That is:  If you have an important
variable, you will decrease its /unique/ contribution when you include other
measures that are correlated with it.  In regression, what we are testing -- if you
want to be precise about it -- are "partial regression coefficients".
The usual tests in ANOVA likewise "control for" the contribution of the other
predictors.
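The design point can be made concrete with the standard error of a partial regression coefficient, which is inflated by 1/(1 - R²) of that predictor on the others (the variance inflation factor). A numpy sketch (illustrative names and seed, not from the thread): y depends only on x1, but adding an x2 correlated about 0.9 with x1 roughly doubles x1's standard error and shrinks its t-ratio:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
x1 = rng.normal(size=N)
x2 = 0.9 * x1 + np.sqrt(1 - 0.81) * rng.normal(size=N)  # corr(x1, x2) ~ 0.9
y = 1.0 * x1 + rng.normal(size=N)                       # only x1 matters

def coef_and_se(X, y, j):
    """OLS coefficient j (0 = first predictor) and its standard error."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    r = y - Xd @ beta
    mse = (r @ r) / (len(y) - Xd.shape[1])
    cov = mse * np.linalg.inv(Xd.T @ Xd)
    return beta[j + 1], np.sqrt(cov[j + 1, j + 1])

b_alone, se_alone = coef_and_se(x1[:, None], y, 0)
b_with, se_with = coef_and_se(np.column_stack([x1, x2]), y, 0)
print(f"x1 alone: b = {b_alone:.2f}, SE = {se_alone:.3f}, t = {b_alone/se_alone:.1f}")
print(f"with x2:  b = {b_with:.2f}, SE = {se_with:.3f}, t = {b_with/se_with:.1f}")
```

Note this has nothing to do with sample size: the standard error of x1's partial coefficient is inflated however large N is, so its unique contribution is harder to detect whenever correlated measures share the model.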

--
Rich Ulrich

> Date: Thu, 16 Jun 2016 01:31:37 -0700
> From: [hidden email]
> Subject: why does statistical power decrease with more predictors?
> To: [hidden email]