Hi there, :wave:
I hope someone can help me with this. I am currently analysing data from my research and have run into several questions. The first concerns internal consistency reliability. In a within-subject design, I created a 13-item self-rated 9-point scale to assess the emotional responses elicited by three different emotion elicitors (neutral, positive, negative). The scale was administered after each elicitor was presented, so each participant's emotional responses were measured three times with the same 13 items. I want to check the internal consistency reliability of this 13-item scale. In this case, should I test the internal consistency three times and report a Cronbach's alpha coefficient for each condition separately, or can I merge the data from all three conditions and report a single Cronbach's alpha? Thank you so much in advance.
C_SH
Hi there,
You'll need to check the internal reliability separately for each stimulus condition - I'm assuming part of your hypothesis is that the different stimuli produce different scores (at minimum, the neutral stimuli versus the other two). Assuming you have six experimental groups and a completely balanced design (in subject numbers), you should be able to ignore presentation order as an effect when testing the internal reliability.
Cheers
Michelle
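For reference, the per-condition check amounts to one RELIABILITY run per stimulus condition in SPSS syntax. A minimal sketch only, assuming the 13 items are stored as adjacent variables em1 to em13 (placeholder names) and a variable called condition marks the three measurement occasions:

  SORT CASES BY condition.
  SPLIT FILE SEPARATE BY condition.
  RELIABILITY
    /VARIABLES=em1 TO em13
    /SCALE('Emotion items') ALL
    /MODEL=ALPHA
    /SUMMARY=TOTAL.
  SPLIT FILE OFF.

The /SUMMARY=TOTAL subcommand adds the item-total statistics, showing what alpha would be if each item were deleted.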
Hello Michelle,
Thank you so much for your reply. Yes, you are right - there are six counterbalanced groups in the design. I hope you don't mind my asking further questions here, or should I post them under another title? I have now encountered another question, about testing the differences on each item of the scale across conditions with 36 participants. I was advised to compare the ratings across the three conditions via ANOVA with Tukey post hoc comparisons. However, I checked normality and found the data are not normally distributed, so I ran a Friedman ANOVA to check for overall differences among the three conditions and then Wilcoxon signed-rank tests as post hoc comparisons, item by item. Am I doing this correctly? I checked two items using the non-parametric tests above and found that the results were the same as those from the parametric ANOVA! Why do they show similar results? Should I just use ANOVA then? Thank you very much.
C_SH
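For reference, that Friedman-then-Wilcoxon sequence looks roughly like this in SPSS syntax. A sketch only, assuming wide format with one variable per item-condition pair; it1_neu, it1_pos and it1_neg are placeholder names for item 1:

  NPAR TESTS
    /FRIEDMAN=it1_neu it1_pos it1_neg.

  * Pairwise follow-ups for item 1; variables pair up across WITH.
  NPAR TESTS
    /WILCOXON=it1_neu it1_neu it1_pos WITH it1_pos it1_neg it1_neg (PAIRED).

With three pairwise tests per item, a Bonferroni-adjusted criterion of .05/3 ≈ .017 is a common safeguard against inflated Type I error.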
Hi again :)
What are your research hypotheses? Primarily, are you trying to develop a rating scale for use in further research, e.g. looking to see which items should be removed (so the main point of this work is rating-scale development), or are you trying to determine whether your rating scale successfully discriminates between subjects exposed to different types of emotional stimuli? Or both?
For your question about whether it is okay to use ANOVA on non-normal data, have a read of this page:
http://www.google.co.nz/url?sa=t&source=web&cd=2&ved=0CB0QFjAB&url=http%3A%2F%2Fpsyphz.psych.wisc.edu%2F~shackman%2Fpsy410_StatisticsRulesofThumbforViolationsofANOVAAssumptions.doc&rct=j&q=anova%20assumption%20of%20normality&ei=C3-zTPblH4qEvgP1_u36DQ&usg=AFQjCNE5uKc9msH-Y1SLdE5Xw4l7TBmINQ
HTH
Cheers
Michelle
Hey, Michelle.
Thank you. I think I am doing both. I used a pilot study with a similar within-subject design to develop this scale, involving 33 participants. Originally it was a 15-item scale. I tested the internal consistency reliability for each stimulus condition as you suggested, then deleted two items and used the resulting 13-item scale in the main study to differentiate the positive affects elicited by these three types of stimuli. Since two items were deleted for the main study, I thought I had better test and report its reliability again there. Responses to the scale in each stimulus condition then also provide the information needed to discriminate the different types of affect elicited by the different types of stimuli. I will group four particular types of affect from these 13 items, based on inspection of the pilot-study results and theoretical support from other research. This is why I tried to use ANOVA with post hoc comparisons to discriminate the differences among conditions, but found that the data in each condition are not normally distributed, since my sample is only 36. The non-parametric tests showed results similar to the ANOVA. In this case, should I report the normality checks and then go ahead and use the ANOVA post hoc results?
C_SH :-)
Hi there again,
Sorry, tied up at work. Yes - because you amended the scale and used the shorter version in the latest study, you're going to need to re-establish its reliability.
Out of curiosity, have you looked at doing something like a principal components analysis to see how the scale items weight for measuring subject reactions within each of the three sets of emotional stimulus conditions? I'm wondering if some type of dimension-reduction technique would be useful there. It is possible, for example, for one item to work well for one type of emotional stimulus, for another item to work for both the positive and negative conditions, and so on. Would this help with what you are trying to do? It would allow you to do the grouping based on your research data.
How non-normal are your data? Given your small sample size (36), it is theoretically possible that the scores are normally distributed in the population but that you just happened to draw a non-normal sample. Have you tried testing your results to see if they match another type of distribution? I don't know the method to use for this in SPSS.
Cheers
Michelle
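A PCA along those lines would look something like this in SPSS syntax - a sketch, again using the placeholder item names em1 to em13, and run separately per condition (e.g. under the SPLIT FILE shown earlier):

  FACTOR
    /VARIABLES=em1 TO em13
    /PRINT=INITIAL EXTRACTION ROTATION
    /PLOT=EIGEN
    /CRITERIA=MINEIGEN(1)
    /EXTRACTION=PC
    /ROTATION=VARIMAX.

One caveat: with 36 cases for 13 items, the loadings will be unstable, so any structure that emerges should be treated as exploratory.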
Hi Michelle,
:-) Thank you!! I didn't run a principal components analysis to group the items - 36 participants seems like too small a sample for that? The rationale for the grouping is based on my theoretical assumptions, which align with the results of the pilot study (the theory selects two or three items per group), so I think this might be reasonable!? I did check the normality (Shapiro-Wilk) and homogeneity of variance (Levene's) of each item in the three conditions, and found something that really confuses me: for example, item A is not normally distributed in one condition but is normally distributed in the other two, and some items' variances differ significantly while others do not. In order to move on, I ran both parametric (ANOVA with Tukey comparisons) and non-parametric (Friedman test and Wilcoxon signed-rank test) analyses on the four characterised groups derived from my pilot study, and found that the results are the same. In this case, I don't know whether I should report both types of test or just the non-parametric tests. Do you have any idea? Thank you.
C_SH
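For reference, the per-item normality check described here corresponds to an EXAMINE run with normal probability plots - a sketch with the same placeholder names, where the Tests of Normality table reports Shapiro-Wilk for samples of this size:

  EXAMINE VARIABLES=it1_neu it1_pos it1_neg
    /PLOT=NPPLOT
    /STATISTICS=DESCRIPTIVES.

Repeating this per item (or listing all item-condition variables at once) reproduces the condition-by-condition pattern described above.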
Hi there,
You could always try PCA if you have some time, and see what you get. :)
Given that your data seem to be both (1) non-normal in distribution and (2) heterogeneous in variance - violating two of the assumptions of parametric tests such as ANOVA - I would report the results of the non-parametric tests.
Cheers
Michelle
