
Re: EFA to CFA( was Re: )

Posted by Art Kendall on Nov 20, 2011; 8:48pm
URL: http://spssx-discussion.165.s1.nabble.com/no-subject-tp4997278p5008843.html

Good point. Unipolar (extent) scales are very useful. E.g., testing for unidimensionality can be very informative.
Sometimes scaling can also reveal that constructs treated as single bipolar dimensions are actually two distinguishable dimensions. Bem found that conformity to masculine and feminine stereotypes did not work as a single bipolar dimension: one can be low on both, high on both, or high on one and low on the other. This is analogous to the idea that position needs to be measured by both latitude and longitude.

Liberal-conservatism's 3 distinguishable factors are analogous to using longitude, latitude, and altitude.

The number of dimensions it takes to adequately measure a construct very much depends on the purposes of the research.

Art Kendall
Social Research Consultants




On 11/20/2011 12:18 PM, R B wrote:
Another, somewhat related point-- If one is truly interested in constructing an equal-interval measure (e.g., "ruler") of a unidimensional construct along a continuum ranging from the low end (e.g., low depression) through the upper end (e.g., severe depression) upon which both items and persons can be placed, Rasch modeling has been shown to prove quite useful.

By fitting a Rasch model, one may examine various critical aspects of an equal-interval measure. First, one can map items and persons onto the "ruler" (a.k.a. a Wright map) to see the distribution of persons across the continuum of severity, and to determine how well the items cover that continuum (a.k.a. bandwidth; that is, do your items adequately cover the lower end of depression?). One can also examine the extent to which the construct is unidimensional, how well the current sample of items fits the model, whether some persons are not being placed as the model would predict and potentially why (e.g., atypical response patterns such as endorsing suicide but not endorsing "less severe" items), whether there is differential item functioning across populations (e.g., males versus females), and the list goes on and on.
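To make the idea concrete, here is a minimal numpy sketch (not SPSS syntax, and not a substitute for dedicated Rasch software such as Winsteps): it simulates dichotomous responses from a Rasch model and then recovers the item difficulties with a crude joint maximum-likelihood loop. All numbers (500 persons, 8 items, the difficulty range) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate dichotomous responses from a Rasch model:
# P(X=1) = 1 / (1 + exp(-(theta - b)))
n_persons, n_items = 500, 8
theta = rng.normal(0.0, 1.0, n_persons)      # person locations ("severity")
b = np.linspace(-2.0, 2.0, n_items)          # item difficulties spanning the continuum
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
X = (rng.random((n_persons, n_items)) < p).astype(float)

def jml_rasch(X, n_iter=50):
    """Crude joint maximum-likelihood fit: alternate Newton steps for
    person and item parameters; difficulties are centered to fix the scale."""
    th = np.zeros(X.shape[0])
    bb = np.zeros(X.shape[1])
    for _ in range(n_iter):
        pr = 1.0 / (1.0 + np.exp(-(th[:, None] - bb[None, :])))
        # Person step (clipped so perfect/zero scores cannot diverge)
        th = np.clip(th + (X - pr).sum(1) / np.maximum((pr * (1 - pr)).sum(1), 1e-9), -5, 5)
        pr = 1.0 / (1.0 + np.exp(-(th[:, None] - bb[None, :])))
        # Item step, then center difficulties at zero
        bb = bb - (X - pr).sum(0) / np.maximum((pr * (1 - pr)).sum(0), 1e-9)
        bb = bb - bb.mean()
    return th, bb

theta_hat, b_hat = jml_rasch(X)
print(np.round(b_hat, 2))   # recovered difficulties: easy items low, hard items high
```

Because persons and items end up on the same logit scale, sorting `theta_hat` and `b_hat` together gives a rough text-mode Wright map of the kind described above.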

Ryan

On Sun, Nov 20, 2011 at 10:35 AM, Art Kendall <[hidden email]> wrote:
Varimax is used rather than promax when the goal is to have measures that give a good operational definition of a construct while distinguishing related ideas.

Each item is considered an imperfect measure of a construct, so we measure many times. We have a firmer idea of a construct when we see what is common to a set of items. An item has 3 parts: common variance, unique variance, and noise (aka error variance). When we construct scales we want to do summative scoring that retains the common variance. The unique variance may come from consistent sources that may or may not be related to the construct.
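A small numpy illustration of that decomposition: the items below share one common factor plus unique noise, the scale score is the equal-weight sum, and Cronbach's alpha estimates how much of the sum-score variance is common rather than unique. The loadings and sample size are arbitrary assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 6 items = common variance + unique/error variance
n = 1000
common = rng.normal(size=n)                  # the shared construct
items = np.column_stack(
    [0.7 * common + rng.normal(scale=0.7, size=n) for _ in range(6)]
)

# Summative scoring: the scale score is the equal-weight sum of the items
scale_score = items.sum(axis=1)

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)"""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

print(round(cronbach_alpha(items), 2))       # high when common variance dominates
```

With these made-up loadings the inter-item correlation is about .5, so alpha lands in the mid-.80s; shrinking the common loading or adding noise drives it down.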

One reason it has long been conventional to write at least 50% more items than needed is that one wants divergent validity among the set of measures.
We assume that we are not perfect writers of items for constructs that are somewhat fuzzy even to the originator of the construct.
Often in practice we refine our constructs when we see that some items are double-barreled. When items are double-barreled it becomes unclear just what is being measured. The next round of item writing attempts to find items that distinguish between scale measures so that the traits measured are divergent.

For instance, in developing Lorr's liberalism-conservatism scale, 3 factors arose over time. Lib-con was then used in analysis either with the three roughly uncorrelated scales as a set, or with only that part of lib-con considered relevant to some other phenomenon, like favoring (or not) a particular bill.
1) general lib-con
2) equality
3) sexual freedom.

When I did my dissertation I added a few items that related to more current issues. Lo and behold, endorsing "equal rights for gay people" loaded on both equality and sexual freedom.

This is analogous to including subgroups of cases on only one side of a t-test, or to using orthogonal designs in experiments.

There may be some circumstances where the population of items is measured and one simply wants to collapse data but is not in the process of measurement development. There may be no intent to see if different parts of a more general construct relate differently to other general constructs, or if the parts of one construct relate differently to parts of another construct. In those circumstances there may be no need to have divergent measures.


Art Kendall
Social Research Consultants



On 11/18/2011 3:33 PM, Swank, Paul R wrote:

My experience and that of others (e.g., Preacher and MacCallum) is that most factors in behavioral sciences and many in biomedical sciences are in fact correlated. And in that case, allowing the factors to be correlated gives a cleaner solution. When factors are truly correlated, forcing them to be uncorrelated is what gives cross-loadings.

Paul

Dr. Paul R. Swank,

Children's Learning Institute

Professor, Department of Pediatrics, Medical School

Adjunct Professor, School of Public Health

University of Texas Health Science Center-Houston

From: Rich Ulrich [[hidden email]]
Sent: Friday, November 18, 2011 11:55 AM
To: Swank, Paul R; SPSS list
Subject: RE: EFA to CFA( was Re: )

[see below]


From: [hidden email]
To: [hidden email]; [hidden email]
Date: Fri, 18 Nov 2011 11:11:14 -0600
Subject: RE: EFA to CFA( was Re: )

How can you justify forcing factors to be uncorrelated if in fact they are correlated?

[snip previous; my recommendation of Varimax over Promax.]
- thanks for asking -


It works.

In more detail: It works a lot better than the alternatives.

Explanation: Since we are intending to compute factors from
items using equal weights, we will end up with factors that are
correlated, just as expected. And they will have just about as
much correlation as we expected, judging from my own experience.

If we start with an oblique rotation, we pile correlation (from selecting
items) on top of allowed-correlation (oblique solution). That is,
we are faced with factors that have a lot of double-loaded items.
Either we end up suffering from too much induced correlation from
using non-exclusive scale definitions; or else we drop entirely (as our
OP did) many items which are central to our universe of items.

The situation is different if we were preserving and using the
theoretical factors, but there are good reasons (interpretation,
generalizability) that that practice is rare.

If we look at the geometrical representation in plots, we will see
that a varimax solution does a pretty decent job of describing
factors as correlated as, say, 0.60. And varimax does a much
better job than any oblique solution in separating out the variables
into non-overlapping sets.
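Rich's point can be checked in a small simulation. The numpy sketch below (varimax implemented by hand only to keep it self-contained; the factor correlation of .6 and the loading values are illustrative assumptions) rotates orthogonally, assigns each item to exactly one scale by its largest rotated loading, and then forms equal-weight sum scores. The sum scores come out substantially correlated anyway, just as described.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two latent factors correlated ~0.6, each measured by 5 items
n = 2000
chol = np.linalg.cholesky(np.array([[1.0, 0.6], [0.6, 1.0]]))
F = rng.normal(size=(n, 2)) @ chol.T
X = np.hstack([
    0.8 * F[:, [0]] + rng.normal(scale=0.6, size=(n, 5)),
    0.8 * F[:, [1]] + rng.normal(scale=0.6, size=(n, 5)),
])

# Unrotated loadings: top two principal components of the correlation matrix
R = np.corrcoef(X, rowvar=False)
vals, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
A = vecs[:, -2:] * np.sqrt(vals[-2:])        # 10 x 2 loading matrix

def varimax(A, gamma=1.0, n_iter=100):
    """Standard orthogonal varimax rotation via the SVD algorithm."""
    p, k = A.shape
    T = np.eye(k)
    for _ in range(n_iter):
        B = A @ T
        G = A.T @ (B**3 - (gamma / p) * B @ np.diag((B**2).sum(axis=0)))
        U, _, Vt = np.linalg.svd(G)
        T = U @ Vt
    return A @ T

Ar = varimax(A)
assign = np.abs(Ar).argmax(axis=1)           # each item goes in exactly one set
s1 = X[:, assign == 0].sum(axis=1)           # equal-weight sum scores
s2 = X[:, assign == 1].sum(axis=1)
print(round(float(np.corrcoef(s1, s2)[0, 1]), 2))  # correlated despite orthogonal rotation
```

The orthogonality constraint applies to the rotated loading axes, not to the equal-weight scores built from item subsets, which is why the correlation between the scores survives.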

By the way, Promax is what I did most of my experimenting with,
after it seemed superior (to several other oblique rotations)
for the sort of scaled data I've regularly reduced to factors.

--
Rich Ulrich

===================== To manage your subscription to SPSSX-L, send a message to [hidden email] (not to SPSSX-L), with no body text except the command. To leave the list, send the command SIGNOFF SPSSX-L For a list of commands to manage subscriptions, send the command INFO REFCARD
