Grad Pack missing values module compatibility with AMOS

Grad Pack missing values module compatibility with AMOS

Paola Chivers-2

Hi,

 

I am currently using the Grad Pack version of SPSS v17. It does not have the Missing Values module, and as a student in Australia I cannot purchase this add-on module.

 

I want to run an SEM in AMOS but have missing data, so ideally I need to impute values using Bayesian imputation. Can anyone tell me:

·         If I run the imputation through the Missing Values module externally (on another computer with the full version of SPSS 17) to create an imputed file, can I then use that imputed file with AMOS in my Grad Pack version of SPSS v17 (which lacks the Missing Values module)?

 

Before someone asks why I don’t just use the ‘other’ computer for my analysis: I am an external student (2½ hours from campus, which has the full version) and need to be able to work on the SEM models long term (months) from home with the Grad Pack version only.

 

Any advice or others’ experience would be appreciated.

 

Regards,

Paola

 

“Ours has become a time-poor society, fatigued by non-physical demands and trying to compartmentalize daily living tasks. It is small wonder that physical activity is discarded in this environment” (Steinbeck, 2001, p. 126)

 

Please consider the environment before printing this email.

 

 


Re: Grad Pack missing values module compatibility with AMOS

Jill Adelson
Hi Paola,
 
I've never used the Missing Values Analysis (MVA) pack with Amos, only with SPSS. Once you run MVA, you retain the original data set and get x new data sets (depending on how many imputations you request). You can use that data with any version of SPSS. However, SPSS with MVA will give you results for each imputation plus pooled tests. If you run SPSS without MVA, you would need to filter out the original data, and your tests would then all look as if you had far more data than you do, because the imputed data sets would simply be combined. I do not know whether the same is true of Amos.
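To see why the stacked file misleads software that is not imputation-aware, here is a minimal sketch (plain Python with made-up numbers; nothing below is SPSS or Amos syntax): a proper pooled analysis averages the point estimate across the m imputations, while a naive analysis of the stacked file behaves as if you had n × m cases.

```python
def pooled_estimate(imputed_means):
    """Rubin's rule for the point estimate: average across imputations."""
    return sum(imputed_means) / len(imputed_means)

# Say the original file had n = 100 cases and you requested m = 5 imputations.
n, m = 100, 5
per_imputation_means = [10.2, 10.4, 10.1, 10.3, 10.5]  # made-up values

print(pooled_estimate(per_imputation_means))  # one estimate, still based on n cases
print(n * m)  # rows a naive "stacked" analysis would treat as the sample size
```

The standard errors are the bigger problem: pooling also combines within- and between-imputation variance, which the stacked analysis ignores entirely.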
 
Hope that helps,
Jill

On Tue, May 19, 2009 at 4:34 AM, Paola Chivers <[hidden email]> wrote:


Classification: LPA and cluster analysis

Dale Glaser
Good evening all.......I would be interested in gathering your insights into classification differences between hierarchical and nonhierarchical cluster analysis vs. latent profile analysis. I obtained (n = 111) a 2-class model using Mplus (8 continuous-level predictors), and decided to compare the classification with cluster analysis in SPSS. Using the k-means QuickCluster option and constraining to a 2-cluster solution, the classification results were very similar to the latent profile analysis. However, using the hierarchical approach (with a Euclidean distance measure and the average linkage method), essentially a one-cluster solution results. I searched some texts/articles today trying to find out why there may be congruity between finite mixture modeling and nonhierarchical cluster analysis methods but not necessarily with the hierarchical approach, but I couldn't find any sources. One of my multivariate texts did state that, depending on the seed/type of partitioning as well as the type of clustering algorithm, it may not be atypical to have discordance between the two types of clustering methods (hierarchical vs. nonhierarchical), so I wonder whether this extends to finite mixture modeling?
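For what it's worth, agreement between two classifications can be quantified with a simple pair-counting (Rand) index, which is insensitive to arbitrary cluster labels. The sketch below is plain Python on invented labels, not Mplus or SPSS output; the near-one-cluster hierarchical result is made up for illustration.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of case pairs on which two partitions agree
    (both together or both apart). 1.0 = identical classifications."""
    agree = 0
    pairs = list(combinations(range(len(labels_a)), 2))
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a == same_b:
            agree += 1
    return agree / len(pairs)

lpa = [1, 1, 1, 2, 2, 2]     # latent-profile classes (toy data)
kmeans = [2, 2, 2, 1, 1, 1]  # k-means clusters, labels permuted
hier = [1, 1, 1, 1, 1, 2]    # near one-cluster hierarchical result

print(rand_index(lpa, kmeans))  # 1.0: same partition despite relabeling
print(rand_index(lpa, hier))    # much lower agreement
```

With n = 111 this runs over 111·110/2 pairs, which is still instantaneous.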
 
Any insights would be most appreciated. Thank you...............
 
Dale

Dale Glaser, Ph.D.
Principal--Glaser Consulting
Lecturer/Adjunct Faculty--SDSU/USD/AIU
President, San Diego Chapter of
American Statistical Association
3115 4th Avenue
San Diego, CA 92103
phone: 619-220-0602
fax: 619-220-0412
email: [hidden email]
website: www.glaserconsult.com
 

Re: Classification: LPA and cluster analysis

Hector Maletta

With hierarchical cluster analysis (the CLUSTER command in SPSS) there is no single solution: the procedure starts with N clusters of 1 member each and finishes with one cluster including all N cases as members. Therefore it is not surprising that hierarchical clustering came up with a “one-cluster solution”: that is just the last step in the procedure, not “the solution”. What you have to do next is examine the various “solutions”, from 1 to N clusters, including all the intermediate results (the penultimate one was a two-cluster solution), to see whether any of them is to your liking. Remember, in all this, that clustering is not a parametric but a heuristic procedure; there is no “correct” solution. You can check, externally, which clustering solution is better for your particular purposes. For instance, if you seek clusters that are maximally homogeneous internally, and maximally distinct from each other, on some other variable of interest, you can run a one-way ANOVA of that variable across the different clustering solutions to see which is best for that purpose. Likewise, if you want a moderate number of clusters, say from 2 to 6, you can restrict yourself to those “solutions” and choose the one you judge best.
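As a rough illustration of that external check: compute the one-way ANOVA F on the criterion variable for each candidate partition and compare. The sketch below is plain Python with invented numbers, not SPSS syntax; in practice you would also look at the p-values and at substantive interpretability, not the F alone.

```python
def f_statistic(groups):
    """One-way ANOVA F for a list of groups, where each group is a list
    of the external criterion variable's values for one cluster."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# The same external variable split by two candidate "solutions" (toy data):
two_cluster = [[1.0, 1.2, 1.1], [3.0, 3.2, 3.1]]
three_cluster = [[1.0, 1.2], [1.1, 3.0], [3.2, 3.1]]

print(f_statistic(two_cluster))    # large F: tight, well-separated clusters
print(f_statistic(three_cluster))  # smaller F: a worse partition here
```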

As each procedure uses a different algorithm to include or exclude cases in/from clusters, it is not surprising either that solutions do not necessarily coincide case by case. Even within the same procedure, say hierarchical or quick cluster, using different criteria may end up with different clustering decisions for specific cases. Such is the nature of clustering.

 

Hector

 




Re: Classification: LPA and cluster analysis

Art Kendall
In reply to this post by Dale Glaser
All forms of clustering are exploratory.  In my experience, different proximity measures and different agglomeration methods yield different results.  This has been true whether clustering types of classrooms, poverty of county pops, congressional districts, schizophrenics, music preferences, etc. That is why since the early 70's I have used multiple clustering methods.  I treat cases that are clustered together by several methods as "cores". That often leaves some cases unclustered.  I then iteratively use the classification phase of DISCRIMINANT to find cases that are far from the center of the assigned cluster or assigned to a cluster other than the expected cluster.  Those cases are then considered unclassified going into the next round of DFA.
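One minimal way to operationalize the “cores” idea for two methods (a plain-Python sketch with toy labels; this is not the DISCRIMINANT workflow itself, just the first agreement step): match each cluster of method 1 to the method-2 cluster its members most often fall in, and treat disagreeing cases as unclassified.

```python
from collections import Counter

def core_cases(partition_a, partition_b):
    """Indices of cases whose cluster assignment agrees across two methods
    after best-matching the (arbitrary) labels; the rest stay unclassified."""
    # Map each cluster in A to the B-cluster its members most often fall in.
    mapping = {}
    for a in set(partition_a):
        members = [partition_b[i] for i, lab in enumerate(partition_a) if lab == a]
        mapping[a] = Counter(members).most_common(1)[0][0]
    return [i for i, lab in enumerate(partition_a)
            if mapping[lab] == partition_b[i]]

method1 = [1, 1, 1, 2, 2, 2]
method2 = [5, 5, 6, 7, 7, 7]  # different label scheme, one disagreement
print(core_cases(method1, method2))  # case 2 is left out of the cores
```

With more than two methods you would intersect the agreeing sets across all of them before moving to the DFA refinement step.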

Some hierarchical techniques show a measure of "error" which sometimes jumps when very disparate clusters are joined. In the past, those jumps and interpretability were the criteria for the ballpark number of clusters. These days the AIC and/or BIC from TWOSTEP can also be used to ballpark the number of cores to retain.

WRT the single-link procedure: in your instance it will show all clusterings from 111 clusters down to one. Looking at the plots often suggests a guess about the number to retain.


IIRC, about 20 or so years ago someone at a Classification Society meeting pointed out that latent profile analysis was the same as some other clustering approach, but I do not recall which. If not in the details, it still shares the goal of other nonhierarchical approaches: finding a single nominal-level variable that describes a set of profiles, where cases have similar values within a cluster and the centroids of the clusters have dissimilar values. Methods differ in how much they emphasize the shape, elevation, or scatter of profiles. They also differ in the degree to which they tend to produce stringy or compact clusters.

The Classification Society is for people interested in such issues.
http://thames.cs.rhul.ac.uk/~fionn/classification-society/

To ask for more info, you might want to post to this list:

http://lists.sunysb.edu/index.cgi?A0=CLASS-L

Art Kendall
Social Research Consultants
