Hey guys, I'm having some trouble with my conjoint analysis.
I used a full-profile, ratings-based conjoint with 6 attributes, each with 2 levels. Using a fractional factorial design I got an orthogonal array of 8 profiles, plus I created 2 holdout cases. Each respondent was asked to rate the 10 profiles, one question per profile, and 236 respondents completed the questionnaire.

I have run the conjoint analysis, which gave me several outputs (e.g. model description, utilities, importance values and correlations). I know that the correlations table provides information about the relation between the observed preference values and the values estimated by the model, and that coefficients larger than .70 are considered good, right? I also understand that the holdout-case coefficients can be used as a check on the validity of the estimated utilities. The SPSS Conjoint manual says:

"In many conjoint analyses, the number of parameters is close to the number of profiles rated, which will artificially inflate the correlation between observed and estimated scores. In these cases, the correlations for the holdout profiles may give a better indication of the fit of the model. Keep in mind, however, that holdouts will always produce lower correlation coefficients."

Strangely, my output looks like this:

Correlations(a)
                               Value    Sig.
Pearson's R                     ,986    ,000
Kendall's tau                   ,786    ,003
Kendall's tau for Holdouts     1,000    .
a  Correlations between observed and estimated preferences

My questions:

1) Does anybody know how this is possible: a value of 1,000 for Kendall's tau for holdouts, with a blank/dot for the significance, when this coefficient is supposed to be lower than the others? Does this value indicate a bad fit of the model, or can no inferences be made from it? In other words, what might be the reason that SPSS is not able to calculate a Kendall's tau for the holdout scores with a significance value?

2) How can I use the holdout cases (or holdout utilities) to determine the validity of the model? Do I simply run a bivariate correlation (tick Pearson) between the actual ratings and the predicted utility scores? (I have pasted the syntax I had in mind below.)

3) Since I have one dependent variable (a rating question, "how likely is it that you will join this program?", for each scenario), how can I determine the reliability of this variable? Does it make sense, for instance, to run a reliability analysis on the ratings of the 10 scenarios and use Cronbach's alpha as an indicator of the reliability of the dependent variable, which is measured on a 5-point Likert scale? (Also sketched below.)

Thanks in advance for any input.
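P.S. In case it helps to see what I mean for questions 2 and 3, here is roughly the syntax I was planning to try. The variable names (rating1 to rating10 for the ten profile ratings, pred9 and pred10 for the predicted scores of the two holdouts) are just placeholders for my own columns, and this assumes the ten ratings are stored as consecutive variables in the file:

* Question 2: correlate the observed holdout ratings with the scores
  predicted from the estimated utilities (pred9 and pred10 would have
  to be computed first from the part-worths).
CORRELATIONS
  /VARIABLES=rating9 rating10 pred9 pred10
  /PRINT=TWOTAIL NOSIG.
NONPAR CORR
  /VARIABLES=rating9 rating10 pred9 pred10
  /PRINT=KENDALL TWOTAIL.

* Question 3: Cronbach's alpha over the ten profile ratings.
RELIABILITY
  /VARIABLES=rating1 TO rating10
  /SCALE('Profile ratings') ALL
  /MODEL=ALPHA
  /SUMMARY=TOTAL.

Does that look like the right approach, or am I missing something?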
