Not an SPSS question.
Of course you want to use forethought in making a scale. It sounds to me like your real problem is that you have too many chefs huddling over the stew -- someone wants to toss in *every* possible item in order to make sure that their own spurned theory has some representation.

Let's say you have 200 candidates that make up your early universe of items. Now you want to reduce that to 100, or to 50 or fewer. You still have too many for a pilot study, even after you drop the ambiguous ones. So, what do you keep? Certainly, you want to represent the dimensions that you *imagine* will be important. Depending on the aim of the study, you may or may *not* want to keep items for a dimension that someone else (or the literature) says is important.

Think of what you will write up -- What is the basis for including items? What do you want to conclude? How defensive will you have to be, and for what sort of critics? You can't say that a dimension "does not exist" for your eventual subjects if you had no items representing it.

-- Rich Ulrich

Date: Tue, 23 Oct 2012 13:06:36 -0700
From: [hidden email]
Subject: questionnaire design - dimensions before or after EFA?
To: [hidden email]
The simple answer to your question is that an EFA is not for confirming theory; it's for development. That isn't to say it isn't frequently misused with the intent of confirming a theory of dimensions, but that should be done with CFA, not EFA. EFA should be done on the items after data are collected and before any extensive work on the latent constructs is done. However, even that is a bit simplistic a picture of the measurement-design process.

First you start, as Rich mentions, with a "universe" of possible items. Typically, before even administering these items, a group of experts will vet them with the intent of coming up with the best items that fit the intent of the measure. This is often referred to as the face-validity phase. In many cases, this will be repeated after data collection has been completed and EFA has developed a set of latent constructs, but let's hold off on that thought.

After the universe has been reduced down to a sampling of questions considered the best to be tested, you typically run these questions through a sample of people. Often this initial draft of the survey or measure has many more questions than will ultimately be used (though you do want to keep the item count reasonable for people to fill out), as the next few stages will rely more on data analysis to reduce the measure, not instinct, theory, or magic beans.

Once the initial round of data collection is complete (presumably you have used a sample representative of the population the measure will be used with), you run a series of basic descriptive analyses. This is to ensure the data meet certain basic assumptions for correlations to hold up, as the EFA won't necessarily be reliable otherwise. You may also want to look at patterns of missingness at this stage, as it may be pertinent to exclude any items with high missingness before proceeding, and possibly revisit those items if they are important. For instance, questions about sensitive topics are often skipped more than others, but the information may be important, so asking in a different way may be necessary.

Once the above is completed, you can run an EFA, and often it is easiest to interpret if you include a rotation. If the item correlations are relatively low, then varimax is best. In fact, I tend to argue that a varimax should always be run first, but if there is also evidence for high item and construct correlation, an oblimin rotation may make more sense (oblique rotation allows the latent constructs to correlate).

Review the latent constructs developed in the above analysis, review the items contained within, and ensure a minimum number of items are in each scale. My minimum is 3 for smaller scales and 5 for most others. In other words, if I only have a measure containing 10 items, I may consider 3 items acceptable for a scale, but I typically like to see at least 5 items. If I have a set of items that could make up either one construct of 8 items or two constructs of 5 and 3, and I don't feel they are obviously measuring something different, then I would often go for the 8.
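The EFA-with-rotation step described above can be sketched in Python with scikit-learn, which supports a varimax rotation (oblique rotations such as oblimin are not in scikit-learn; the separate `factor_analyzer` package offers them). The simulated item data and the choice of two factors here are purely illustrative:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulated responses: 300 people x 6 items, constructed so that
# items 0-2 share one latent trait and items 3-5 share another.
f1 = rng.normal(size=(300, 1))
f2 = rng.normal(size=(300, 1))
items = np.hstack([
    f1 + rng.normal(scale=0.5, size=(300, 3)),  # items 0-2 load on trait 1
    f2 + rng.normal(scale=0.5, size=(300, 3)),  # items 3-5 load on trait 2
])

# EFA with a varimax rotation to make the loading pattern interpretable
efa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = efa.components_.T  # rows = items, columns = factors

# Assign each item to the factor on which it loads most strongly
assignment = np.abs(loadings).argmax(axis=1)
print(loadings.round(2))
print(assignment)  # items 0-2 should share one factor, items 3-5 the other
```

On real data you would first check the descriptive assumptions discussed above and choose the number of factors from a scree plot or parallel analysis rather than fixing it in advance.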
At this stage I typically review the item correlations within the constructs. Large constructs containing more than 5 items can now have items removed which show a high correlation to other items in the scale. My general rule is .7 or so, but this is really up to the reviewers. Just remember, the fact that someone likes or is attached to an item is no reason to keep it. Also, as far as I'm concerned, if two items have a correlation of more than .85, then you really have no excuse for keeping both. They are practically measuring the same thing (you are talking about more than 70% common variance, since .85 squared is about .72), and the wording really doesn't matter anymore. I get this a lot: people argue that theory suggests both are unique and important. That's great; reality suggests they are the same thing. Anyway, the point is, at this stage you have come up with a set of mathematical latent constructs and reduced them to the most parsimonious variable sets.

Then you have those question stragglers: items that don't load on the main factors (they will load on their own unique factors). These items should either be eliminated, or they need to be fact-based enough to stand on their own. For instance, if your goal is to measure someone's purchasing perceptions or addictive personality, single items are not acceptable. If you are trying to measure the color of their shoes, a single item is sufficient (i.e., you don't need a latent construct to know a factual and directly measurable piece of information -- a manifest variable). If there are single variables or items that should be part of a construct and they don't hang together, my suggestion is to rewrite the items and try again.

Once all of this is finished, I suggest a second round of face-validity analysis. At this stage you have a group of experts (some would argue a different group of experts than at the initial stage) vet the constructs and their items. Essentially you want to see how well the items fit with the concepts developed for the latent constructs. At this stage you begin to develop the labels for the constructs and make certain there is consensus on them.

Finally, once this is all done, you typically want to collect data on a representative sample again, and this time run a CFA to confirm the findings of the EFA. If this holds up, great. If not, you start over.

Matthew J Poes
Research Data Specialist
Center for Prevention Research and Development
University of Illinois
510 Devonshire Dr. Champaign, IL 61820
Phone: 217-265-4576
email: [hidden email]

From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Andra Th
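The redundancy rule above (flag pairs correlated above ~.85, since .85 squared is about 72% shared variance) is easy to automate. A minimal sketch with made-up item names, where one item is deliberately a near-duplicate of another:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Made-up pilot data: item_c is almost a duplicate of item_a
items = pd.DataFrame({
    "item_a": rng.normal(size=200),
    "item_b": rng.normal(size=200),
})
items["item_c"] = items["item_a"] + rng.normal(scale=0.1, size=200)

corr = items.corr().abs()

# Flag every pair of items correlated above the .85 redundancy cutoff
redundant = [
    (corr.index[i], corr.columns[j], round(corr.iloc[i, j], 2))
    for i in range(len(corr))
    for j in range(i + 1, len(corr))
    if corr.iloc[i, j] > 0.85
]
print(redundant)  # the item_a/item_c pair should be flagged
```

In practice you would review each flagged pair with the item wordings in hand and decide which member to drop, rather than dropping one automatically.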