Hi,
I want to analyse attitude survey data (a 75-item survey, using a Likert scale). The survey has 7 subscales (produced by summated item scores). I wish to test for group differences (5 groups).

Usually one might apply MANOVA/ANOVA, but K-S tests suggest the scale distributions are non-normal, and so ANOVA seems inappropriate.

Kruskal-Wallis is a non-parametric equivalent, but the problem is that the set of measures comes from the same set of persons and so is not independent (i.e. each respondent responds to all subscales).

Another option I looked at is the Friedman test, but this seems designed more for repeated-measures-type designs.

Has anyone else come across this problem in the analysis of survey results? It would seem a situation that would often arise - i.e. a survey with subscales, and wanting to look at group differences. I am puzzled that what seems a common situation does not seem to have an obvious solution!

Please can you advise?

thanks,

Clive.
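A minimal SPSS-syntax sketch of the kind of K-S check described above; scale1 and group are hypothetical placeholder names for one subscale score and the five-group factor, and EXAMINE's NPPLOT keyword is what produces the K-S (Lilliefors) and Shapiro-Wilk tests:

    * Normality tests and plots for one subscale score, within each group.
    EXAMINE VARIABLES=scale1 BY group
      /PLOT=NPPLOT HISTOGRAM
      /STATISTICS=DESCRIPTIVES.

As the later posts in this thread point out, with a large sample these tests will flag even trivial departures from normality, so the histograms and Q-Q plots deserve at least as much attention as the significance values.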
Hi Clive
There are a number of issues here.

From what you write below I assume you want to test between independent groups, not between variables: i.e. you have seven scales V1 to V7, and for each you want to see whether it differs across the 5 groups. Assuming your data are arranged so that you have one row per distinct respondent and one column for each of the 7 scale sums, then a Kruskal-Wallis test is fine. The fact that each of your respondents has responded to multiple scales is not an issue, since you are testing one scale at a time.

An alternative may be to transform your scale scores so they more closely resemble a normal distribution - e.g., if they are currently positively skewed, a log transformation may do the trick. Then you could use a one-way ANOVA for the test. You will most likely want to adjust the significance level used to reflect multiple testing, or, if the scales are naturally and statistically related, run a single MANOVA instead of seven ANOVAs.

If you are using the sum of items to compute the overall scale score, be VERY careful: if someone has missed out an item, then you will effectively be scoring that item as 0 if you use the SUM function to calculate the overall score (as opposed to item1 + item2 + ...), UNLESS you specify SUM.X, where X is the number of items in the scale. To avoid diminishing the sample size through attrition, it may be better to use the mean score if a few respondents have missed out a few items at random.

cheers

Chris

Figure It Out Statistical Consultancy and Training Service for Social Scientists
Visit www.figureitout.org.uk for details of my consultancy services and forthcoming training courses in November 2008
Dr Chris Stride, C. Stat, Statistician, Institute of Work Psychology, University of Sheffield
Telephone: 0114 2223262  Fax: 0114 2727206
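A minimal SPSS-syntax sketch of the points above: the scoring trap, the Kruskal-Wallis test, and the transform-then-ANOVA and MANOVA routes. All names (item1 TO item10 for a 10-item subscale, scale1 to scale7, group coded 1 to 5) are hypothetical placeholders, and the log transform is only sensible if a scale really is positively skewed:

    * Scoring: a plain SUM() treats a skipped item as 0 rather than missing.
    * SUM.10 returns missing unless all 10 items were answered.
    * MEAN.8 averages the items a respondent did answer, requiring at least 8.
    COMPUTE scale1 = SUM.10(item1 TO item10).
    COMPUTE scale1m = MEAN.8(item1 TO item10).
    EXECUTE.

    * Kruskal-Wallis test of one subscale across the five groups.
    NPAR TESTS /K-W=scale1 BY group(1,5).

    * Or transform towards normality and run a one-way ANOVA.
    * (Adjust alpha for the 7 subscales, e.g. to roughly .05/7 = .007.)
    COMPUTE lnscale1 = LN(scale1 + 1).
    ONEWAY lnscale1 BY group.

    * Or a single MANOVA on all seven subscale scores at once.
    GLM scale1 scale2 scale3 scale4 scale5 scale6 scale7 BY group.

NPAR TESTS will also take a list of scale variables on /K-W, running a separate Kruskal-Wallis test for each subscale in one call.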
Well-developed scales are frequently *not severely discrepant* from normal. Double-check data entry, and double-check scale construction using RELIABILITY.

The assumption of normality has to do with the residuals, not the raw data. Try the GLM procedure and take a look at the different casewise residual variables you can get. If you RANK the DVs, you could see whether the GLM on the ranked data leads to different conclusions. If it does, see whether you can come up with a sensible model that includes transformed variables. *IFF* you can come up with a theory that involves transformations, you could try that. Otherwise, just live with the results.

Art Kendall
Social Research Consultants
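A minimal SPSS-syntax sketch of the checks Art describes, again under hypothetical names (item1 TO item10 for one subscale's items, scale1 for its score, group for the five-group factor):

    * Check scale construction: Cronbach's alpha and item-total statistics.
    RELIABILITY
      /VARIABLES=item1 TO item10
      /SCALE('Subscale 1') ALL
      /MODEL=ALPHA
      /SUMMARY=TOTAL.

    * One-way GLM, saving unstandardized and standardized residuals.
    GLM scale1 BY group
      /SAVE=RESID ZRESID
      /PRINT=HOMOGENEITY.

    * Rank the DV and re-run the model on the ranks; compare the conclusions.
    RANK VARIABLES=scale1 (A)
      /RANK INTO rscale1
      /TIES=MEAN
      /PRINT=NO.
    GLM rscale1 BY group.

The saved residual (typically named RES_1) can then be fed back into EXAMINE, so the normality check is applied to the residuals, which is where the assumption actually lives.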
I would add to this that tests of normality, like tests of any other hypothesis, tend to be more powerful with large sample sizes. Of course, this is precisely when departures from normality are least problematic.
Paul R. Swank, Ph.D
Professor and Director of Research
Children's Learning Institute
University of Texas Health Science Center
Houston, TX 77038
Hi Paul,

Thank you for the further comment on this topic. In this context, may I ask what sample size you would consider as large?

Thank you,

Regards,

Clive.
Certainly, with hundreds of subjects this is a problem. It is, of course, difficult to nail down a single number, since different tests have different degrees of power.

I just ran a little simulation (1000 replications) with two variables, x1 and x2, where x1 was based on a sum of binomial random numbers with k (trials) = 20 and p = .45, and x2 was the same except with k = 20 and p = .30. The skewness for x1 was .06 and for x2 was .20. I generated two samples of each, one with n = 30 and one with n = 100.

The results indicated that for x1 with n = 30, the various tests (Anderson-Darling, Cramer-von Mises, Kolmogorov-Smirnov, and Shapiro-Wilk) were significant between 14.4% of the time (Shapiro-Wilk) and 37.7% (Kolmogorov-Smirnov); with n = 100, the range was from 71.2% (Shapiro-Wilk) to 99.9% (Cramer-von Mises). For x2 with n = 30, the tests were significant between 21.7% (Shapiro-Wilk) and 46.5% (Kolmogorov-Smirnov) of the time; with n = 100, they were 100% for all except Shapiro-Wilk, which was 91.4%.

So none of the tests were very good at detecting skewness when n was 30 for either distribution, but all were quite powerful when n = 100. Thus even when the departure from normality is very slight (x1, with a mean of 9 out of 20 points), samples of size 100 would detect it 99% of the time for all tests except Shapiro-Wilk, which would detect it 71% of the time. And some would argue that a skew of .20 is nothing much to worry about, but with an n of 100, all the tests would label that distribution as non-normal more than 90% of the time.

Paul R. Swank, Ph.D
Professor and Director of Research
Children's Learning Institute
University of Texas Health Science Center
Houston, TX 77038
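Paul does not say which package he used, and base SPSS offers only the K-S (Lilliefors) and Shapiro-Wilk tests through EXAMINE, so the following is only a rough SPSS-syntax analogue of one arm of the simulation (the x2, n = 100 condition); the seed, replication count and output file name are arbitrary:

    SET SEED=20081017.
    * Generate 1000 replications of 100 cases of x2 ~ binomial(k = 20, p = .30).
    INPUT PROGRAM.
    LOOP rep = 1 TO 1000.
      LOOP obs = 1 TO 100.
        COMPUTE x2 = RV.BINOM(20, 0.30).
        END CASE.
      END LOOP.
    END LOOP.
    END FILE.
    END INPUT PROGRAM.
    EXECUTE.

    * Run the normality tests once per replication via SPLIT FILE.
    * OMS routes each 'Tests of Normality' table to a data file, not the Viewer.
    SPLIT FILE BY rep.
    OMS
      /SELECT TABLES
      /IF COMMANDS=['Explore'] SUBTYPES=['Tests of Normality']
      /DESTINATION FORMAT=SAV OUTFILE='C:\temp\normtests.sav' VIEWER=NO.
    EXAMINE VARIABLES=x2 /PLOT=NPPLOT /STATISTICS=NONE.
    OMSEND.
    SPLIT FILE OFF.

The proportion of replications in normtests.sav whose K-S or Shapiro-Wilk significance falls below .05 is then the empirical rejection rate Paul describes; the Q-Q plots that NPPLOT also produces can be suppressed with a second OMS request selecting CHARTS with VIEWER=NO if the run is slow.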