Hi all,

I am having trouble running a multilevel model using "mixed linear models" in SPSS v14, and I'm wondering if someone might have some insight into the problem. I used the following syntax to test an unconditional multilevel model in which individuals are nested within cases (the cases are conflict resolution processes facilitated by at least one "neutral"), as noted in SUBJECT(CASEID) below. The outcome variable is a measure of the quality of the final agreement reached during the process:

MIXED QUALITY
  /METHOD = REML
  /PRINT = SOLUTION TESTCOV
  /FIXED = | SSTYPE(1)
  /RANDOM = INTERCEPT | SUBJECT(CASEID) COVTYPE(UN).

The output gives me the following:

Estimates of Covariance Parameters(a)

Parameter                               Estimate   Std. Error   Wald Z   Sig.   95% CI Lower   95% CI Upper
Residual                                .180410    .012723      14.180   .000   .157121        .207152
Intercept [subject = CASEID] Variance   .276698    .063378       4.366   .000   .176619        .433486

a. Dependent Variable: QUALITY.

As I understand it, the Residual estimate gives me the explainable within-case variance, and the Intercept estimate gives me the explainable between-case variance for this outcome variable. In Judith Singer's paper discussing the output from a multilevel model run with PROC MIXED in SAS, she states that this between-group variance should function as the ceiling of explainable between-group variance for this outcome.

So here's my problem. When I specify the "full model," in which 12 predictors are entered as FIXED EFFECTS ONLY (the only random effect in the model is still the intercept), the between-groups variance parameter estimate INCREASES substantially (from .277 in the table above to .587 in the full fixed-effects model, pasted below). This baffles me, because as I add explanatory variables to the model, the explainable between-groups variance estimate ought to be DECREASING, or, if it's an awful model, ought to remain unchanged. So why would the between-cases variance actually increase in the conditional model, when the unconditional model ought to be giving me the ceiling estimate for that variance parameter?

Here's the syntax I used, and the output, for the full fixed-effects model (there are 12 predictors after the WITH keyword):

MIXED QUALITY WITH M7 COMPLEX c_Q13A c_appro c_engage c_medskl c_relinf
    c_effec c_othrvw c_Q15E c_Q14G c_Q14H
  /METHOD = REML
  /PRINT = SOLUTION TESTCOV
  /FIXED = M7 COMPLEX c_Q13A c_appro c_engage c_medskl c_relinf
    c_effec c_othrvw c_Q15E c_Q14G c_Q14H | SSTYPE(1)
  /RANDOM = INTERCEPT | SUBJECT(CASEID) COVTYPE(UN).

The output that concerns me:

Estimates of Covariance Parameters(a)

Parameter                               Estimate   Std. Error   Wald Z   Sig.   95% CI Lower   95% CI Upper
Residual                                .107862    .010415      10.356   .000   .089264        .130336
Intercept [subject = CASEID] Variance   .587327    .146167       4.018   .000   .360614        .956571

a. Dependent Variable: QUALITY.

I appreciate any help anyone can offer!

Thanks very much,
Katherine McKnight
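For concreteness, here is the arithmetic behind the "ceiling" reading of the unconditional model, as a back-of-the-envelope sketch using the estimates above (rounded to three decimals): the intraclass correlation and the naive level-2 proportional reduction in variance work out to

\[
\hat\rho \;=\; \frac{\hat\tau_{00}}{\hat\tau_{00} + \hat\sigma^2} \;=\; \frac{0.277}{0.277 + 0.180} \;\approx\; 0.61,
\qquad
R^2_{\text{level 2}} \;=\; \frac{\hat\tau_{00}^{\,\text{null}} - \hat\tau_{00}^{\,\text{full}}}{\hat\tau_{00}^{\,\text{null}}} \;=\; \frac{0.277 - 0.587}{0.277} \;\approx\; -1.12 .
\]

The negative value is the puzzle the replies below address: taken at face value, the fixed effects appear to "explain" less than none of the between-case variance.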
Normally, I believe, one would specify the higher-level variable as random (CASEID), and then the residual would be subjects within CASEID. Secondly, REML maximizes the likelihood for the random effects, not the fixed effects. If you wish to see the effects of these covariates on the variances, you should use ML instead. And then it can happen that the variances increase when you add variables to the model. See Snijders, T., & Bosker, R. (1999). Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling. Thousand Oaks, CA: Sage.

Paul R. Swank, Ph.D.
Professor
Director of Research
Children's Learning Institute
University of Texas Health Science Center-Houston
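One way to act on the ML suggestion above is simply to re-run the full model with /METHOD = ML. The following is a minimal sketch, reusing the variable names from the original post rather than a tested program; with ML, the -2 log-likelihoods of the unconditional and full models can be compared directly, which REML does not support when the fixed parts differ.

* Same specification as the full fixed-effects model above, but estimated with
  ML rather than REML, so that -2 log-likelihoods from models that differ only
  in their fixed effects can be compared.
MIXED QUALITY WITH M7 COMPLEX c_Q13A c_appro c_engage c_medskl c_relinf
    c_effec c_othrvw c_Q15E c_Q14G c_Q14H
  /METHOD = ML
  /PRINT = SOLUTION TESTCOV
  /FIXED = M7 COMPLEX c_Q13A c_appro c_engage c_medskl c_relinf
    c_effec c_othrvw c_Q15E c_Q14G c_Q14H | SSTYPE(1)
  /RANDOM = INTERCEPT | SUBJECT(CASEID) COVTYPE(UN).

The unconditional model would be re-fit the same way, changing only /METHOD from REML to ML, to give the baseline deviance for the comparison.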
Hi there Kathy,
The best sources that will answer your specific questions are Snijders, T. A. B., & Bosker, R. J. (1994). Modeled variance in two-level models. Sociological Methods and Research, 22, 342-363. You can also find very compelling explanations in a book by the same authors, Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modelling (1999). Furthermore, Kreft and de Leeuw (1998) have the same (although less "technical") discussion in one of their later chapters. Interestingly, both Raudenbush and Bryk (2002) and Singer and Willett (2003) touch on these same issues.

In a nutshell: while the population within-variance is approximated well by the within-variance in samples, the population between-variance component is more complicated, because the between-variance as measured in samples also includes a bit of within-group sampling variance (or, more broadly, sampling error). This means there is a fair bit of confounding in the estimation of the between-variance.

Thus, if you introduce a level 2 variable, it will not affect the residual level 1 variance, but it will reduce the intercept (and slope) variances if your model is so specified; this is logical and leads to a reduction of the level 2 variance(s). However, if you introduce a level 1 variable that has no between-variance component, your level 1 residual variance should decrease (which is logical), but your level 2 variance will increase. The observed between-group variation is unaffected by the introduction of a pure level 1 variable, BUT BECAUSE THE LEVEL 1 VARIANCE IS DECREASED, the level 2 variance estimate is forced to increase, precisely because of the confounding I have alluded to (if a quantity stays unchanged and you decrease one of its components, by definition the other component must increase).

It's a clumsy explanation, but I cannot do better right now, as I am still digesting all this literature myself. I am sure there are others who can offer better (and perhaps more technically correct) explanations, couched in clearer language. But I have tried; I hope this does more good than harm.

Thanks,
Russell
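A compact way to state the confounding argument above, sketched for a balanced two-level random-intercept design with groups of size \(n\) (the general treatment is in the Snijders and Bosker references cited in this thread), is

\[
\operatorname{Var}(\bar Y_{\cdot j}) \;=\; \tau_{00} + \frac{\sigma^2}{n},
\]

so the variability of the observed group means mixes the true between-case variance \(\tau_{00}\) with a share of the within-case variance \(\sigma^2\). A predictor with no between-group variation (for example, one centered within groups) leaves the left-hand side essentially unchanged while shrinking \(\hat\sigma^2\) (here from about .180 to .108), so the fitted \(\hat\tau_{00}\) is pushed upward, which matches the direction of the change from .277 to .587 in the original post.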
