Amos Chi-Square interpretation


Amos Chi-Square interpretation

Arno Haslberger
As a non-expert I was recently puzzled by a model statistic I found in an
article published in a respectable journal. The model the article was based
on had the following: Chi-Square = 1633.29 (486 df), p=.000. To my knowledge
this means that the model needs to be rejected. What am I missing? Under
what conditions would one accept and work with such a model? Thank you!

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD

Re: Amos Chi-Square interpretation

John F Hall
Don't worry: p is the probability of obtaining a chi-square at least as large as that in that particular sample, i.e. infinitesimally small (not zero, since SPSS only displays 3 decimal places, but close to it).  If this were a crosstab, it would be fairly safe to reject the null hypothesis that the two variables are statistically unrelated, but it is worth checking what happens when controlling for other variables.
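The point about the displayed p-value can be checked directly; a minimal sketch in Python using SciPy, with the figures from the original post:

```python
from scipy.stats import chi2

# Tail probability for the reported statistic: chi-square = 1633.29
# on 486 degrees of freedom. The software displays this as .000, but
# the true value is merely too small to show at three decimal places.
p = chi2.sf(1633.29, df=486)

print(f"p = {p:.3e}")       # vanishingly small, but strictly greater than 0
print(f"rounded: {p:.3f}")  # displays as 0.000, matching the published table
```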
----- Original Message -----
Sent: Wednesday, April 14, 2010 11:27 AM
Subject: Amos Chi-Square interpretation




Re: Amos Chi-Square interpretation

Garry Gelade
In reply to this post by Arno Haslberger
Arno

With a large sample, there are always going to be significant discrepancies
between the model and the data. Therefore, in addition to reporting chi-square,
researchers often use other indicators of fit that are less influenced by
sample size, e.g. in structural equation modelling the Comparative
Fit Index (CFI), Tucker-Lewis Index (TLI), and Goodness of Fit Index (GFI).
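For illustration, CFI and TLI can both be computed by hand from the model chi-square and the chi-square of the independence (baseline) model, which Amos reports in its output. A minimal sketch in Python; the baseline figures below are invented for the example, since the article only reports the model chi-square:

```python
chi2_m, df_m = 1633.29, 486      # target model (from the article)
chi2_b, df_b = 12000.0, 528      # independence model (hypothetical values)

# CFI compares the non-centrality (chi-square minus df) of the model
# with that of the baseline model.
cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 0)

# TLI (also called NNFI) penalises model complexity via chi-square/df ratios.
tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)

print(f"CFI = {cfi:.3f}, TLI = {tli:.3f}")
```

With these made-up baseline numbers the model would sit at roughly the conventional 0.90 cut-off despite its significant chi-square, which is exactly the situation described above.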


Re: Amos Chi-Square interpretation

statisticsdoc
In reply to this post by Arno Haslberger
Arno,
If the sample size in the analysis is large, the chi-square goodness-of-fit
test may be significant even if the fit indices (e.g. RMSEA, GFI, CFI) are
very good.  In this instance, more weight is given to the fit indices when
judging the adequacy of the model.
Best
Steve Brand

www.Statisticsdoc.com
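The point above can be illustrated with RMSEA, which adjusts the chi-square for both degrees of freedom and sample size. A small sketch in Python; the article does not report N, so the value used here is purely hypothetical:

```python
from math import sqrt

chi2_stat, df, n = 1633.29, 486, 800   # N = 800 is an assumed sample size

# Point estimate of RMSEA: sqrt(max(chi2 - df, 0) / (df * (N - 1)))
rmsea = sqrt(max(chi2_stat - df, 0) / (df * (n - 1)))

print(f"RMSEA = {rmsea:.3f}")
```

At this assumed N the RMSEA comes out around .05, under the common .06 rule of thumb, so the model could look acceptable by this index even though the chi-square test rejects it.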


Re: Amos Chi-Square interpretation

Mike
In reply to this post by Garry Gelade
I think that Arno was a little vague in his first post, and it might be
easy for people to miss his mention of AMOS in the subject line.
I believe that Arno was referring to a structural equation model (SEM),
as mentioned by Garry Gelade, and in this context the chi-square
test is a "goodness of fit" test.  That is, given a covariance matrix
of variables, can the covariances be explained by specifying relationships
among the variables involved (both empirical/manifest variables and
latent variables or factors in the factor-analysis sense)?  If one has
correctly specified the relationships among the variables, then the
chi-square comparing the observed covariance matrix with the model-implied
matrix should be non-significant (i.e., the differences
are only due to sampling error).  If the model does not fit, then the
chi-square will be statistically significant, implying that there are
significant discrepancies between the specified model and the observed
covariance matrix.  This appears to be the case presented by Arno.
The question is what to do in this type of situation.  A few points:

(1)  The model that one fits to a covariance matrix will often have
some theoretical foundation, such as how the empirical variables are
to be related to the latent variables.  Further, assumptions have to
be made about such things as whether errors are independent or
correlated (which are calculated as part of the model) and what the
relevant population distribution is (e.g., multivariate normal).
Misspecification of any of these can lead to a significant chi-square.
The problem is identifying where the misspecification is and whether
there is some solution that corrects it (e.g., some errors are
correlated, and if the model specifies these correlated errors, the
fit might improve and even become nonsignificant; it depends on what
one is modeling).  One can use modification indices to identify where
the model can be changed to improve its fit, but there is a danger
here:  one may have the wrong theory or assumptions, and one can use
the modification indices to change the specified model until it fits
the observed covariance matrix as closely as possible, but one should
realize that one is then only fitting the model to this sample.  If
one is changing the model just to fit this sample, the result is
unlikely to be replicated in other samples.

(2)  A large sample often provides sufficient statistical power
to detect small but statistically significant model specification errors.
Note that if your model is correct and your data have minimal error,
then your model will fit the data regardless of sample size (with
very large samples in the thousands it is possible to have chance
discrepancies that represent Type I errors, but these should be
rare).  It is very hard to come up with a correct model for a
covariance matrix or, in other words, misspecification error is
hard to avoid.  But is the misspecification error due to something
of great theoretical importance (e.g., the theory says that there
should be one latent variable but a better model turns out to have
two or three correlated latent variables, something the theory might
prohibit), or is it due to something minor (a couple of error terms
are correlated)?  Only by examining the model itself, seeing where the
discrepancies are, and identifying the best-fitting model (purely
as an exercise, after one has tested one's original model and has
found it wanting; one really has to ask oneself why there is a
discrepancy between the theory and the observed data) can an
informed decision be made about the validity or, perhaps more
importantly, the usefulness of the theory and model.
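The sample-size point can be made concrete: in maximum-likelihood estimation the test statistic is T = (N - 1) * F_ml, so the same minimised fit-function value F_ml yields very different p-values at different N. A sketch in Python with an invented F_ml (none of these numbers come from the article):

```python
from scipy.stats import chi2

f_ml, df = 3.5, 486   # hypothetical minimised fit-function value

p_values = {}
for n in (150, 500, 2000):
    t = (n - 1) * f_ml          # T = (N - 1) * F_ml
    p_values[n] = chi2.sf(t, df)
    print(f"N = {n:5d}: T = {t:8.1f}, p = {p_values[n]:.4g}")
```

The identical model-data discrepancy is non-significant at N = 150 but overwhelmingly significant at N = 2000, which is why indices that correct for N get more weight with large samples.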

(3)  I've seen instances in the research literature where a structural
equation model has been fit but there are significant discrepancies,
that is, the goodness-of-fit chi-square is statistically significant.
Technically, the model/theory doesn't fit the data, but the researcher
may feel that the model has "heuristic" value, that is, it allows one
to organize the relevant details about a phenomenon in a meaningful
way.  One may want to hold on to such a theory because, even if it is
flawed or wrong, there may be no other theory that provides such a
useful framework for thinking about the phenomena.  Perhaps the model
can be "fixed" to fit better but, depending upon what one is modeling,
other variables may need to be included, more reliable measures used,
or different types of relationships included even though the theory
may not allow for this.  But this might force the researcher to revise
the theory or his/her thinking about the phenomenon.  Box's comment
that all models are wrong but some are useful is relevant here.

(4)  It can be the case that a paper gets published because the reviewers
have an inadequate background to correctly evaluate the analysis.  It's
one thing to submit an article to, say, the journal Structural Equation
Modeling, where one can expect to get hosed for even minor problems,
and another to submit to some 2nd- or 3rd-tier journal in a content
area (true story:  I was co-author on a paper that used confirmatory
factor analysis to determine whether subscales on an instrument were
unidimensional, that is, explained by a single factor or latent
variable; the methods reviewer from the journal said he had never
heard of confirmatory factor analysis, a warning sign about the
journal and its competency to evaluate the methodology of research
submitted to it).

So, to answer the original question of "under what conditions would one
accept such a model?"  It depends:

(a)  Even a misspecified model can have heuristic value, especially if it
is widely known in an area and there is no reasonable alternative to the
theory specifying the model.

(b)  "Hardcore" SEM modelers would probably say one has a first
approximation to a model, but if the misspecification is not due to
minor problems (e.g., correlated errors), one might have to re-think
the theory and consider which variables may have to be added and
which might have to be dropped, as well as changing the
configuration of relationships among them.

The latter camp might argue against publication of such "early work",
but if there are no other models in existence (someone has to come up
with the first model, even if it is likely to be seriously wrong), it
might serve as an incentive to others to work on improved models.  For a
more detailed discussion of these issues (as well as which goodness-of-fit
indices or measures to use), it might be worthwhile to check out the
SEMNET-L archives (WARNING:  there are ongoing wars about the right way
to do SEM); see:
http://bama.ua.edu/archives/semnet.html
as well as some of the research literature on misspecification problems
in SEM.

-Mike Palij
New York University
[hidden email]

