Log-linear models: goodness-of-fit criteria & comparing alternative models

Schwarz, Paul
I am interested in using log-linear models to analyze responses from a
survey questionnaire. In particular, I want to assess the relative
importance of different questions and the correlations between
questions. I was attracted to log-linear models because the data are
mostly categorical and I didn't need to specify a response variable for
the models. Because the survey questions are grouped into sections,
log-linear models seemed like a reasonable, if not good, way to analyze
these data.

If SPSSX-L readers have other recommendations for this type of analysis,
I'd appreciate the ideas.

I have several questions, though, regarding log-linear models and how to
interpret the output from SPSS:

1) I have been using the likelihood ratio in the goodness-of-fit table
to assess model fit. Usually the likelihood ratio and the Pearson
chi-square statistics are consistent, e.g., both are non-significant.
Sometimes, however, they aren't, as in the following:

Goodness-of-Fit Tests(a,b)
                         Value    df    Sig.
Likelihood Ratio       132.483   216   1.000
Pearson Chi-Square    1269.380   216    .000
a  Model: Poisson
b  Design: Constant + Q08A + Q08B + Q08C + Q08D + Q08A * Q08D
   + Q08B * Q08D + Q08C * Q08D

What does it mean when the two tests are not consistent with one
another? Should I ignore the Pearson chi-square statistic and focus only
on the likelihood ratio?
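
To make sure I'm reading the two statistics correctly, here is a small
sketch of how I understand they're computed and of how sparse cells seem
to drive them apart. It's in Python rather than SPSS, and the table is
made up for illustration, not my data:

import numpy as np
from scipy import stats

# Hypothetical, deliberately sparse 3x3 table of counts (not my survey data)
observed = np.array([[50., 30., 0.],
                     [40., 20., 0.],
                     [ 0.,  0., 2.]])

# Expected counts under an independence model:
# row total * column total / grand total
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()

# Pearson chi-square: X^2 = sum of (O - E)^2 / E over cells
pearson = ((observed - expected) ** 2 / expected).sum()

# Likelihood ratio: G^2 = 2 * sum of O * ln(O / E), treating 0 * ln(0) as 0
nz = observed > 0
g2 = 2.0 * (observed[nz] * np.log(observed[nz] / expected[nz])).sum()

df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
print("Pearson X^2 = %7.2f  p = %.4f" % (pearson, stats.chi2.sf(pearson, df)))
print("LR      G^2 = %7.2f  p = %.4f" % (g2, stats.chi2.sf(g2, df)))

# The cell with O = 2 but an expected count of about 0.03 inflates X^2 far
# more than G^2, because (O - E)^2 / E grows much faster than O * ln(O / E)
# as E shrinks.

If that reading is right, I suspect the disagreement in my output has
something to do with how sparse my table is, but I'd welcome correction.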


2) I have been using the Z value in the parameter estimates table as a
measure of the relative importance of a model term (both main effects
and interaction terms). As I understand it, this is appropriate, but are
there any caveats I should know about?
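
In case it matters, here is what I take the Z value to be: the Wald
statistic, i.e. the estimate divided by its standard error. A minimal
Python sketch with a hypothetical 2 x 2 table of counts (variable names
A and D are made up):

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical cell counts for two binary questions, A and D
cells = pd.DataFrame({
    "A":     ["yes", "yes", "no", "no"],
    "D":     ["yes", "no",  "yes", "no"],
    "count": [30, 10, 15, 25],
})

# Saturated Poisson log-linear model for the 2 x 2 table
fit = smf.glm("count ~ A * D", data=cells,
              family=sm.families.Poisson()).fit()

# Wald Z for each term: coefficient divided by its standard error
# (the "z" column that fit.summary() prints)
print(fit.params / fit.bse)

My assumption has been that these Z values put the terms on a common
scale so they can be compared, which is exactly the point I'd like a
sanity check on.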


3) Is there a way to directly compare the fit of two alternative models,
say before and after dropping an interaction term? The goodness-of-fit
tests compare the fit of a model with the saturated model, and I haven't
found a way to compare the fit of two alternative models. If both models
have a non-significant likelihood ratio test, is the better, more
parsimonious model simply the one with fewer terms? And what about a
case like the one above, where I dropped an interaction term and the
likelihood ratio test is still non-significant, but the Pearson
chi-square statistic went from 199 to the 1269 value above (and hence
became significant)?
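
For what it's worth, the kind of comparison I have in mind is the
nested-model test sketched below: fit the model with and without the
interaction, and refer the change in G^2 (the deviance) to chi-square
with df equal to the number of dropped parameters. Again this is a
Python sketch with hypothetical variable names and counts, not something
I've found in the SPSS output:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical cell counts for three binary questions A, B, D (2 x 2 x 2 table)
cells = pd.DataFrame({
    "A":     ["yes", "yes", "no",  "no",  "yes", "yes", "no",  "no"],
    "B":     ["yes", "no",  "yes", "no",  "yes", "no",  "yes", "no"],
    "D":     ["yes", "yes", "yes", "yes", "no",  "no",  "no",  "no"],
    "count": [20, 5, 8, 12, 6, 14, 10, 9],
})

full    = smf.glm("count ~ A + B + D + A:D + B:D", data=cells,
                  family=sm.families.Poisson()).fit()
reduced = smf.glm("count ~ A + B + D + A:D", data=cells,
                  family=sm.families.Poisson()).fit()

# Change in G^2 (deviance) between the nested models, tested against
# chi-square with df equal to the number of parameters dropped
delta_g2 = reduced.deviance - full.deviance
delta_df = full.df_model - reduced.df_model
print("delta G^2 = %.2f on %d df, p = %.4f"
      % (delta_g2, delta_df, stats.chi2.sf(delta_g2, delta_df)))

Is that the right way to think about the comparison, and is there a way
to get it directly from the SPSS log-linear procedures?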

Thanks to everyone for their time and patience with me.

-Paul Schwarz