Hierarchical regression - Insignificant f change but significant predictor


mindpower
Hello,

First of all, I'm no statistics mastermind, so please excuse any unprofessional expressions.

I want to run a hierarchical linear regression with 4 models:

Model 1 - Baseline model with control variables only (9 controls, to be precise)
Model 2 - Three predictors are added: FAMOS (continuous), FMGMT (dummy), and IIINF (continuous)
Model 3 - The square of FAMOS is added to the model
Model 4 - The interaction between FAMOS and FMGMT is added

I now have the following problem:

Model 1: adjusted R squared is 0.145 and the model is significant
Model 2: adjusted R squared is now 0.164, but the F change is insignificant
Model 3: adjusted R squared decreases to 0.160, F change is insignificant
Model 4: adjusted R squared increases to 0.20, the F change is significant, and the model is significant

In Model 2, FAMOS is significant. In Model 4, FAMOS becomes insignificant, FMGMT is now significant, and the interaction of the two is significant. I guess these changes are kind of normal given the interaction effect. However, I'm wondering whether it makes sense to keep Models 2 and 3 in the regression. Also, how would one interpret the significant coefficient of FAMOS in Model 2 if the F change from Model 1 to 2 is insignificant?

Thank you very much in advance!

Best regards

Re: Hierarchical regression - Insignificant f change but significant predictor

Rich Ulrich
I suppose you have no reply here because this is not an SPSS question,
and it is not a very neat design question.  You have read something about
design, because "hierarchical" is a better choice than "stepwise".  But you
still have the problem of "multiple tests" -- for which you need to be clear,
to yourself, as to the importance of various hypotheses.  As a start, I presume
that the "significance" of the original 9 variables is not very important to
discuss.

And, for instance, I *guess* that the inclusion of Model 3 (the squared term)
is there at someone's insistence that the scaling might require it.  I would
want to consider the test of a squared term as a "precaution"; and once it
is shown to be near zero (as in your Model 3, where the "adjusted r-squared" actually
decreases), that squared term should be dropped, to eliminate its potential
for confounding.


Modern practice rather dislikes "significant" versus "insignificant" when the
information on exact p-levels is available.  Nominally, 0.051 and 0.049  are
"different" if you merely label them S and NS; realistically, they are close.
I can't really say why, but it does bother me a bit to see only "adjusted r-squared"
and not the full R-squared, ever.  I think it is because there are a lot of readers
who would be well-served by the reminder of the relation between the two.
For me, it is true that I would end up paying all my attention to the adjusted
values.
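For reference, the relation between the two is mechanical: adjusted R-squared shrinks the plain R-squared according to the number of predictors relative to the sample size. A sketch (the sample size of 200 is invented; only the 0.145 echoes the post):

```python
def adjusted_r2(r2, n_cases, n_predictors):
    """Adjusted R-squared from plain R-squared, sample size, and predictor count."""
    return 1 - (1 - r2) * (n_cases - 1) / (n_cases - n_predictors - 1)

# With a hypothetical n of 200 cases and the 9 controls of Model 1,
# a plain R-squared of about 0.184 adjusts down to roughly the reported 0.145:
print(round(adjusted_r2(0.184, 200, 9), 3))  # → 0.145
```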

Taking the final question before mentioning the interaction:
The F change for the fuller Model 2 is not-quite-0.05; it has to be somewhat close,
because the added block accounts for enough variance, about 0.019, that one variable is
"significant", but not enough that the result stays "significant" when divided
by 3 d.f. for the overall test.  Your hypotheses?  How do the three tests relate?
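The arithmetic behind that block test can be sketched numerically. Everything here (the sample size and the R-squared levels) is invented for illustration; only the ~0.019 change echoes the post:

```python
# F change when moving from a reduced to a fuller nested model:
#   F = (delta_R2 / k) / ((1 - R2_full) / (n - p_full - 1))
n = 200                             # hypothetical sample size
p_reduced, p_full = 9, 12           # 9 controls, then +3 predictors
r2_reduced, r2_full = 0.200, 0.219  # hypothetical; the change is ~0.019

k = p_full - p_reduced              # 3 d.f. for the added block
df_resid = n - p_full - 1
f_change = ((r2_full - r2_reduced) / k) / ((1 - r2_full) / df_resid)
print(f"F({k},{df_resid}) = {f_change:.2f}")  # well below ~2.65, the approximate 0.05 cutoff for F(3, ~190)
```

A change large enough to make one coefficient's t-test cross 0.05 can still fall short once it is spread over all 3 added degrees of freedom.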

If you *intend* the overall test to control for multiple testing, then you do NOT
declare the result of the one variable that is nominally p < 0.05 to be "significant". 

The interaction term: Model 4 is "significant", an increase of 0.020 (assuming a typo
in the claim of 0.200).

Hypotheses?  Was the design set up so that you could test this interaction? 
If this was the purpose of the design, then you don't need to say much at
all about Model 2 separately, since Model 4, Interaction, invokes that model
when you try to explain it.

Or, do you need to look for an overall test on adding the 3 items + interaction,
in order to proclaim "overall significance"?

Consider --
 a) In the ordinary design, one does not interpret an interaction without using
the two main effects.  Thus, you do have to (or "get to") discuss the main effects.
 b) The test on the interaction is going to show, uniquely, its effect as the item
last-entered; the tests on the two main effects are going to arbitrarily be
affected by the choice of coding of the interaction term.  When one item is
dummy coded, you want to know what the two actual regression lines look like,
so that you can describe the effect being tested. 
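Point (b) is easy to see by writing the dummy-coded fit out. A sketch with simulated data (all coefficients and sizes invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x1 = rng.normal(size=n)                        # continuous predictor (FAMOS-like)
x2 = rng.integers(0, 2, size=n).astype(float)  # 0/1 dummy (FMGMT-like)
# Simulated outcome with a genuine interaction: the slope differs by group
y = 1.0 + 0.5 * x1 + 0.8 * x2 + 0.7 * x1 * x2 + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

# With a 0/1 dummy, the single fit is two regression lines:
print(f"group 0: y = {b0:.2f} + {b1:.2f} * x1")            # intercept b0, slope b1
print(f"group 1: y = {b0 + b2:.2f} + {b1 + b3:.2f} * x1")  # intercept b0+b2, slope b1+b3
```

Describing those two lines is usually clearer to a reader than reporting the raw interaction coefficient.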

 - Coding an interaction:  If you compute
  Interaction = (X1 - X1bar) * (X2 - X2bar),  where X-bar is the respective mean,
then the interaction will be uncorrelated with X1 and X2:  not confounded.
That is more useful for understanding the magnitude of the effects than
including an interaction that *is*  confounded.
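A quick simulated check of that decorrelation (all numbers invented; with a 0/1 dummy the sample correlation comes out near zero rather than exactly zero):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x1 = rng.normal(loc=5.0, size=n)               # continuous predictor with a nonzero mean
x2 = rng.integers(0, 2, size=n).astype(float)  # 0/1 dummy

raw = x1 * x2                                   # confounded product term
centered = (x1 - x1.mean()) * (x2 - x2.mean())  # centered product term

print("corr(x1, raw product):     ", round(float(np.corrcoef(x1, raw)[0, 1]), 3))
print("corr(x1, centered product):", round(float(np.corrcoef(x1, centered)[0, 1]), 3))
```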

--
Rich Ulrich



> Date: Fri, 19 Jun 2015 06:42:04 -0700

> From: [hidden email]
> Subject: Hierarchical regression - Insignificant f change but significant predictor
> To: [hidden email]