Posted by
Poes, Matthew Joseph on
Aug 07, 2012; 9:08pm
URL: http://spssx-discussion.165.s1.nabble.com/Multicolinearity-tp5714614p5714615.html
Certain authors, like Baron and Kenny, would argue this happened because you did your interaction (moderation) analysis incorrectly. They would correctly point out that the information contained in the interaction term is also contained in the two IVs themselves, so the product term is correlated with its components (first explicated in publication by Cohen, as I understand it). This correlation causes a multicollinearity problem, and the model coefficients can be inaccurate. They would go on to say that by mean-centering the IVs, the correlation between the product term and the IVs is reduced, and the multicollinearity with it. More recent research has shown this not to be true, so with ordinary OLS regression you're unfortunately stuck: you can't do what you're trying to do that way and get valid results (which isn't to say that hundreds if not thousands of people don't still do it).
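A quick sketch of why centering looks like it helps (in Python, on made-up simulated data, not your variables): it shrinks the correlation between a predictor and its own product term, even though the fitted interaction model is algebraically unchanged by centering.

```python
import random

# Toy illustration with simulated data: a predictor with a nonzero mean is
# correlated with its own product term, and mean-centering shrinks that
# correlation -- without changing the interaction model's fit.
random.seed(1)
n = 500
x = [random.gauss(5, 1) for _ in range(n)]  # e.g. a 7-point-scale-like IV
z = [random.gauss(4, 1) for _ in range(n)]  # the other IV

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

xz = [xi * zi for xi, zi in zip(x, z)]   # raw product term
mx, mz = sum(x) / n, sum(z) / n
xc = [xi - mx for xi in x]
zc = [zi - mz for zi in z]
xczc = [a * b for a, b in zip(xc, zc)]   # product of centered IVs

r_raw = corr(x, xz)          # sizable correlation with the raw product
r_centered = corr(xc, xczc)  # much smaller after centering
print(round(r_raw, 2), round(r_centered, 2))
```

The correlation (and hence the VIF) drops, which is why centering is so often recommended; the point above is that this cosmetic drop doesn't fix the underlying inferential problem.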
One solution is not to rely on OLS regression methods, and instead turn to a varying parameter model.
Another point to consider is that most people misinterpret the effects of the IVs, in the presence of the moderator, as main effects, which has been shown to be incorrect. As Hayes and Matthes discuss, these are actually conditional effects. Your model with no interaction term has your main effects, but your model with the interaction term has the conditional effect of each IV, and the interaction coefficient is the change in that conditional effect for a one-point change in the moderator. The first thing to consider is that you don't really have two focal predictors and an interaction between them; that doesn't fit with theory, and it complicates interpretation. You have one focal predictor and a second, moderator variable. In your example, you need to choose (based on theory), so let's say it's Divergence, and then treat Relevance as the proposed moderator. Interpreted correctly in the interaction model, .666 is the change in Y for a one-point increase in Divergence when Relevance is at 0. If you think about it, since you are showing that there is an interaction, the value of M (Relevance) is meaningful, and holding it at 0 doesn't make the coefficient for Divergence all that useful on its own (unless you interpret it in light of the interaction effect, which is best done with plotting). One advantage of centering the IVs is that the coefficient for Divergence then becomes its value when Relevance is at the sample mean: for an average amount of relevance, divergence changes Y by ### amount.
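To make the "conditional effect" reading concrete, here is a small Python sketch using the coefficients from your interaction model below (reading the decimal commas as points). The slope of attention on divergence is b1 + b3*relevance, so the effect shrinks as relevance rises, which is what your negative interaction coefficient is telling you:

```python
# Coefficients from the posted interaction model (decimal commas read as points)
b_div = 0.666   # Divergence
b_int = -0.091  # Divergence*Relevance

def divergence_slope(relevance):
    """Conditional effect of a one-point increase in Divergence
    at a given level of Relevance (the moderator)."""
    return b_div + b_int * relevance

# Across a 1-7 Likert range, the divergence effect shrinks as relevance rises:
for m in (0, 1, 4, 7):
    print(m, round(divergence_slope(m), 3))
# 0 0.666
# 1 0.575
# 4 0.302
# 7 0.029
```

At the top of the relevance scale the divergence effect is near zero, so the negative interaction is not saying the product "hurts" attention, only that divergence matters less when relevance is already high.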
Matthew J Poes
Research Data Specialist
Center for Prevention Research and Development
University of Illinois
510 Devonshire Dr.
Champaign, IL 61820
Phone: 217-265-4576
email:
[hidden email]
-----Original Message-----
From: SPSSX(r) Discussion [mailto:
[hidden email]] On Behalf Of Almost Done
Sent: Tuesday, August 07, 2012 2:59 PM
To:
[hidden email]
Subject: Multicolinearity.
Hey, guys! I'm doing research on creative advertising and have to check, for example, whether divergence (rated on a seven-point Likert scale), relevance (rated the same), and the divergence*relevance interaction have an effect on attention, which respondents also rated on a seven-point Likert scale. So when I run a regression this is what I get:
                          B        t     sig.
Constant               .529     .649     .518
Divergence             .666    4.215     .000
Relevance              .573    2.275     .024
Divergence*Relevance  -.091   -2.012     .046
This seemed weird to me because divergence*relevance has a negative influence on the dependent variable attention. How can it be?
So I removed the Divergence*Relevance interaction, and this is what I got:
                B        t     sig.
Constant    1.892    4.113     .000
Divergence   .398    4.622     .000
Relevance    .090    1.167     .245
So the B and significance changed drastically. I've tested for multicollinearity using the VIF. The combination where Divergence was the dependent variable (and Relevance and Divergence*Relevance the independents) was the one where VIF was greater than 5. All the other combinations were fine (VIF was either 1 or slightly greater than 1).
So my question is - what does that mean and how do I proceed? What do I have to do? And how should I explain it? For me it's not important to build a model; I just have to see whether divergence, relevance, and the interaction between the two have an effect on attention and other dependent variables. Could I just remove the divergence*relevance interaction from the model and say that there was multicollinearity?
Also theoretically only divergence should have an impact on attention and not relevance.
--
View this message in context:
http://spssx-discussion.1045642.n5.nabble.com/Multicolinearity-tp5714614.html
Sent from the SPSSX Discussion mailing list archive at Nabble.com.
=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the command. To leave the list, send the command SIGNOFF SPSSX-L For a list of commands to manage subscriptions, send the command INFO REFCARD