
Re: basic que-Factor Analysis

Posted by Mehul Pajwani on Aug 21, 2011; 4:40am
URL: http://spssx-discussion.165.s1.nabble.com/basic-que-Factor-Analysis-tp4706531p4719751.html

The ultimate objective is to segment the respondents based on their perception about current and future economy and how they are/will be dealing with the changes in the economy.

I intend to define the broad dimensions based on the factor analysis and then to use the same for cluster analysis.

Yes, the factors do make sense and seem to be meaningful.

BTW, all 14 of these factors have eigenvalues greater than 1. Would parallel analysis help me reduce the number of variables loading on a given factor? Would it be a good idea to use the current factor solution?

Please let me know your thoughts on this.

Thanks,

Mehul

On Sat, Aug 20, 2011 at 7:00 AM, Art Kendall <[hidden email]> wrote:
What are you using the factor analysis for? (to create a summative score?)  Is this a pre-existing instrument or did you create it? If you are creating summative scales did you remove "splitters" from the final key?


What did you use for a stopping rule? 

Do the factors make sense?


Try running parallel analysis. With moderate Ns (several hundred to a few thousand), I have usually retained the number of factors where the obtained eigenvalue is one more than what would come from random data or random permutations of the same data.
http://flash.lakeheadu.ca/~boconno2/nfactors.html
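For readers working outside SPSS, the logic of parallel analysis can be sketched in a few lines of NumPy. This is a minimal illustration of Horn's method, not O'Connor's program linked above; the function name and the 95th-percentile criterion are choices made for this sketch:

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: compare the observed correlation-matrix
    eigenvalues to eigenvalues from random normal data of the same shape,
    and retain components whose observed eigenvalue exceeds the random one."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.empty((n_sims, p))
    for i in range(n_sims):
        rand = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    # 95th percentile of the random eigenvalues is a common criterion
    threshold = np.percentile(rand_eig, 95, axis=0)
    n_retain = int(np.sum(obs_eig > threshold))
    return n_retain, obs_eig, threshold
```

Because the random-data eigenvalues hover around 1 (slightly above for the first few), this typically retains fewer components than the eigenvalue-greater-than-1 rule does.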



Art Kendall
Social Research Consultants

On 8/19/2011 5:20 PM, Mehul Pajwani wrote:
Makes sense!  Thanks very much to Frank, Farry and Rich!
 
 
Finally, I ran the factor analysis and (I hope I haven't missed anything) here is what I did. I would appreciate it if you could take a quick look and let me know if I have missed anything or could have done this in a better way. I can send you my output as an Excel or SPSS file, if required.
 
 
Iteration-1 (98 variables)
- Ran factor analysis with 98 items. Prior to performing PCA, the suitability of the data for factor analysis was assessed. Inspection of the correlation matrix revealed the presence of many coefficients of 0.3 and above. The KMO value was 0.961 and Bartlett's Test of Sphericity was significant.
- Variables were validated based on communalities, and 19 variables with communalities of less than 0.55 were dropped. A second iteration was run with 79 variables (98 - 19 = 79).
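The communality screen in the step above can be sketched outside SPSS as well. This is a hypothetical NumPy illustration, not the SPSS FACTOR procedure: under PCA, each variable's communality is the sum of its squared loadings over the retained components, and the 0.55 cutoff below simply mirrors the one used here (the function names are invented for this sketch):

```python
import numpy as np

def pca_communalities(data, n_components):
    """Communalities from a PCA of the correlation matrix: for each
    variable, the sum of squared loadings over the retained components
    (loading = eigenvector * sqrt(eigenvalue))."""
    R = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)          # ascending order
    top = np.argsort(eigvals)[::-1][:n_components]
    loadings = eigvecs[:, top] * np.sqrt(eigvals[top])
    return np.sum(loadings ** 2, axis=1)

def keep_by_communality(data, n_components, cutoff=0.55):
    """Indices of variables whose communality reaches the cutoff."""
    return np.where(pca_communalities(data, n_components) >= cutoff)[0]
```

In practice one would re-run the extraction after each drop, as the iterations above do, since removing variables changes the correlation matrix and hence the communalities.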
 
Iteration-2 (79 variables)
- Communalities: all 79 variables have communalities greater than 0.50.
- Measures of sampling adequacy (KMO, on the diagonal of the anti-image correlation matrix) were checked. KMO values for all 79 variables were greater than 0.50.
- After removing variables with low communalities and checking KMOs, the factor solution was examined to remove any components that had only a single variable loading on them. Component 13 was removed since it had only one variable. Component 14 had no variables at all.
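The per-variable MSA check mentioned above can likewise be sketched in NumPy. This is an illustrative version, not SPSS output: the partial correlations come from the inverse of the correlation matrix, and each variable's MSA compares its squared correlations to its squared correlations plus squared partial correlations (`kmo_per_variable` is a name made up for this sketch):

```python
import numpy as np

def kmo_per_variable(data):
    """Per-variable measure of sampling adequacy (MSA), as reported on
    the diagonal of the anti-image correlation matrix: the ratio of
    summed squared correlations to summed squared correlations plus
    summed squared partial correlations."""
    R = np.corrcoef(data, rowvar=False)
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.diag(Rinv))
    partial = -Rinv / np.outer(d, d)      # partial correlations
    np.fill_diagonal(partial, 0.0)
    off = R - np.eye(R.shape[0])          # correlations, diagonal zeroed
    r2 = np.sum(off ** 2, axis=1)
    p2 = np.sum(partial ** 2, axis=1)
    return r2 / (r2 + p2)
```

Values near 1 mean the variable's correlations are mostly shared (factorable) rather than unique pairwise partial associations; the conventional floor is 0.50, as used in the check above.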
 
Iteration-3  (78 variables)
- Communalities: all 78 variables have communalities greater than 0.50.
- The measures of sampling adequacy (KMO) for all the variables are greater than 0.50 (actually greater than 0.7, which is good).
- All 13 components have 2 to 24 variables.
 
Final Solution:
13 components, with variable counts as below. I got 24 variables for Component 1 and 15 for Component 2. Are these too many?
Component   Number of Variables
1           24
2           15
3            6
4            7
5            5
6            6
7            3
8            2
9            2
10           2
11           2
12           2
13           2
Total       78


On Thu, Aug 18, 2011 at 6:30 PM, Rich Ulrich <[hidden email]> wrote:
"Failure to converge in 25 iterations" is not fatal.

It might be an indication that you need 50 or 100 iterations,
when two competing structures seem to fit.

Of course, it could also indicate that var 93 had a lot of
missing (as someone suggested), so that the analysis is a lot
less stable than the analysis that worked, using 92 vars.

--
Rich Ulrich