Is this the syntax that can be downloaded from https://people.ok.ubc.ca/brioconn/nfactors/nfactors.html ? If not, what file type are the attachments?

Art Kendall
Social Research Consultants
In reply to this post by krisscot
Okay, it's been a while since I've done this kind of review, but if I make any mistakes I hope that more knowledgeable people will correct my errors. Here goes:
First, one way of thinking about factor analysis is that it takes a square matrix and tries to determine if there is a smaller matrix that contains the information of the larger matrix.
Consider a correlation matrix that has n variables. This means we have an n x n matrix. But, by definition, if the correlation matrix has non-zero off-diagonal elements (i.e., non-zero correlations between variables), then information in one variable is "redundant" with information in other variables. The question now is whether this matrix can be reduced to a smaller number of rows/columns, say, an m x m matrix (where m < n) in which the rows and columns are independent and the off-diagonal elements are all zero. The number of rows/columns in this smaller square matrix is referred to as the "rank" of the matrix. Old-school factor analysis can be thought of as being concerned with finding the rank of a correlation matrix; the rank represents the number of "factors" needed to explain the original correlation matrix.
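To make that concrete, here is a rough numpy sketch (not SPSS syntax; the six-variable setup and the loading values are invented purely for illustration). It builds a "correlation-like" matrix from only two underlying factors and confirms that its rank is 2, not 6:

# Build a 6 x 6 matrix from only 2 underlying factors and check its rank.
import numpy as np

Lambda = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.6, 0.0],
                   [0.0, 0.8],
                   [0.0, 0.7],
                   [0.0, 0.6]])          # 6 variables, 2 orthogonal factors (made up)

R_reduced = Lambda @ Lambda.T            # implied matrix, communalities on the diagonal

print(np.linalg.matrix_rank(R_reduced))              # -> 2
print(np.round(np.linalg.eigvalsh(R_reduced), 3))    # only 2 non-zero eigenvalues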
Principal factor analysis is one way to reduce a correlation matrix to a smaller matrix represented by independent factors. This is done by using a series of equations that relate the original matrix to the reduced matrix. A loading matrix relating the factors to the original data is estimated (the Lambda, or loadings of variables on factors), and the eigenvalues are part of the solution. If m=5, the eigenvalues will be one set of values because you are trying to calculate the loadings for five factors. If you limit the number of factors to m=3, the solution or eigenvalues will be different because you are now estimating the loadings of variables on 3 factors.
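Again as a hedged numpy sketch (the correlation matrix R is made up, and this is only the simple one-step principal-axis idea, not the iterated procedure where the communalities, and hence the eigenvalues, get re-estimated for each m): keep the m largest eigenvalues/eigenvectors and form the loadings, and you can see that the retained loadings and the reproduced matrix depend on m.

# One-step principal-axis style extraction for different numbers of factors m.
import numpy as np

R = np.array([[1.0, 0.6, 0.5, 0.1],
              [0.6, 1.0, 0.4, 0.2],
              [0.5, 0.4, 1.0, 0.1],
              [0.1, 0.2, 0.1, 1.0]])     # invented correlation matrix

def principal_axis_loadings(R, m):
    vals, vecs = np.linalg.eigh(R)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:m]      # keep the m largest
    return vecs[:, idx] * np.sqrt(vals[idx])

for m in (1, 2):
    Lam = principal_axis_loadings(R, m)
    print(f"m={m} loadings:\n", np.round(Lam, 3))
    print("reproduced R:\n", np.round(Lam @ Lam.T, 3))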
In mathematical terms, eigenvalues greater than 1.00 should represent solutions to the set of simultaneous equations needed to reduce the correlation matrix to the factor matrix. Eigenvalues less than 1.00 are technically really equal to zero. But correlation matrices will often behave badly (the pattern of interrelationships is far more complex than is represented by the simple pairwise correlations, e.g., errors in one variable are correlated with errors in another variable), and the estimation procedure might go awry, which is why negative eigenvalues (mathematically, imaginary numbers are obtained as a solution) might occur.
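Here is a tiny numpy illustration of such a badly behaved matrix. These pairwise correlations are invented and mutually inconsistent (the sort of thing pairwise deletion of missing data can produce), so the matrix is not positive definite and one eigenvalue comes out negative:

# An inconsistent "correlation" matrix with a negative eigenvalue.
import numpy as np

R_bad = np.array([[ 1.0,  0.9,  0.9],
                  [ 0.9,  1.0, -0.9],
                  [ 0.9, -0.9,  1.0]])

print(np.round(np.linalg.eigvalsh(R_bad), 2))   # -> [-0.8  1.9  1.9]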
So, when engaged in exploratory factor analysis, you are asking the program what "rank" the correlation matrix can be reduced to, that is, a smaller square matrix where the rows/columns are independent. The quality of the data is important. If you think you know what the rank should be, or, in other words, the number of factors that underlie the variables you have, then you can specify the rank/number of factors. You will get different estimates depending upon how far from the "true" solution you are. Unfortunately, principal factor analysis can't be used to test factor models; all it can do is try to provide a solution under certain assumptions (e.g., pairwise correlations are the only relationships, errors are uncorrelated, the factors are independent or orthogonal, etc.).
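If you want to play with specifying the number of factors outside of SPSS's FACTOR command (where you would use something like /CRITERIA FACTORS(m)), here is a hedged sketch with scikit-learn's FactorAnalysis. It uses maximum likelihood rather than principal-axis factoring, so it is only an analogue, and the data are simulated just so the example runs on its own:

# Extracting a specified number of factors from simulated 6-variable data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, true_m = 500, 2
scores = rng.normal(size=(n, true_m))
loadings = rng.normal(size=(true_m, 6))
X = scores @ loadings + 0.5 * rng.normal(size=(n, 6))   # 6 observed variables

for m in (2, 4):                                         # "right" vs. over-extracted
    fa = FactorAnalysis(n_components=m).fit(X)
    print(f"m={m}, estimated loading matrix shape: {fa.components_.shape}")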
Real-life data, and correlation matrices based on them, are likely to be more complex, which is why Structural Equation Modeling (SEM) might be used: you can explicitly specify how many factors there should be, whether they are independent or correlated, whether the errors/unique variances are constant and uncorrelated, and so on. But one would have to take a course on SEM to fully understand and appreciate these issues.
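For what it's worth, the covariance structure that SEM/CFA lets you specify directly can be written as Sigma = Lambda * Phi * Lambda' + Theta. Here is a numpy sketch (all values made up) showing how correlated factors go into Phi and a correlated error goes into Theta:

# The model-implied covariance matrix under a small two-factor CFA-style model.
import numpy as np

Lambda = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.0, 0.8],
                   [0.0, 0.7]])                 # which variable loads on which factor

Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])                    # correlated (oblique) factors

Theta = np.diag([0.36, 0.51, 0.36, 0.51])       # unique variances
Theta[0, 2] = Theta[2, 0] = 0.10                # one correlated error, by choice

Sigma = Lambda @ Phi @ Lambda.T + Theta         # model-implied covariance matrix
print(np.round(Sigma, 3))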
I hope I didn't get too much of the above wrong.
-Mike Palij
New York University