I'm interested in obtaining an inter-rater reliability coefficient for 15 speech samples being rated by 20 raters on a 50-point scale. Is the coefficient I'm looking for found in Scale -> Intraclass correlation coefficient?
TIA, Stephen Salbod, Pace University, NYC
Stephen,
That's exactly right. The samples will constitute the cases, raters the variables, and the scores will be in the cells.

Brian
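To make that layout concrete, here is a small Python sketch (invented ratings, 4 samples x 3 raters rather than 15 x 20, purely for illustration) that computes the two-way consistency ICC directly from the ANOVA mean squares — the same quantity the SPSS Reliability procedure reports under the ICC option with a consistency definition:

```python
# Ratings matrix: rows = speech samples (cases), columns = raters (variables).
# Toy numbers for illustration; the real problem is 15 samples x 20 raters.
ratings = [
    [31, 29, 34],
    [40, 38, 41],
    [22, 20, 25],
    [35, 33, 36],
]

n = len(ratings)      # number of samples (rows)
k = len(ratings[0])   # number of raters (columns)
grand = sum(map(sum, ratings)) / (n * k)

row_means = [sum(r) / k for r in ratings]
col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

# Two-way ANOVA sums of squares (one observation per cell)
ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between samples
ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
ss_err = ss_total - ss_rows - ss_cols                    # residual

ms_rows = ss_rows / (n - 1)
ms_err = ss_err / ((n - 1) * (k - 1))

# Consistency ICC: single-rater and average-of-k-raters versions
icc_single = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
icc_average = (ms_rows - ms_err) / ms_rows

print(icc_single, icc_average)
```

The average-measures value is what applies if the score actually used downstream is the mean over all 20 raters; the single-measure value estimates the reliability of one rater working alone.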
Thanks Brian. I'm thinking of going with Alpha rather than ICC. --Steve
[re-posted, on nabble, with minor revision]
Given 20 raters, I would definitely want to look at the simple ANOVA, Raters x Samples, and look at the spreads of the means. Since every reliability statement is a reflection of both the scale and the population, it is good to confirm that the sub-sample shows the expected sort of variation. That is: it is disappointing if all scores lump at one end of the scale, which could yield low reliability; and a strong result is non-robust if it depends on one or two extreme cases.

This is the ICC information: are the Samples (cases) different? The raters' means can show whether there is a systematic rater difference, if that is something that matters to you. Rater differences in mean will lower the ICC somewhat, so they probably should be mentioned if they exist.

The correlations between raters give directly what underlies the alpha information, and those should be interesting too, especially if there is value in detecting the least reliable raters - for purposes of training, more detailed reporting, etc.

-- Rich Ulrich
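The point about systematic rater differences can be seen numerically (a sketch with invented numbers, not data from this thread): adding a constant offset to one rater's column leaves the consistency ICC untouched but pulls down the absolute-agreement ICC, because between-rater variance enters only the agreement denominator.

```python
def icc_pair(ratings):
    """Two-way single-measure ICCs from a samples-by-raters matrix.
    Returns (consistency, absolute_agreement)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    consistency = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    agreement = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    return consistency, agreement

base = [[31, 29, 34], [40, 38, 41], [22, 20, 25], [35, 33, 36]]
# Same data, except the third rater scores everything 8 points higher.
shifted = [row[:2] + [row[2] + 8] for row in base]

c0, a0 = icc_pair(base)
c1, a1 = icc_pair(shifted)
# The rank-ordering information is identical, so consistency is unchanged,
# while absolute agreement drops because the raters now disagree on level.
```

This is why the choice between the consistency and absolute-agreement definitions in the SPSS ICC dialog matters when raters differ in mean severity.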
Hi Rich, Thank you for the clarification. The samples are similar. I'm going with the ICC. Do you by any chance know of an article that compares alpha to the ICC?
--Steve
Stephen,
This may be helpful. I can't find my e-copy right now, but I do have the reference. If I recall, it gets at your question.

Bravo G, Potvin L. Estimating the reliability of continuous measures with Cronbach's alpha or the intraclass correlation coefficient: toward the integration of two traditions. J Clin Epidemiol. 1991;44(4-5):381-90.
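The kind of equivalence that article discusses can also be checked numerically: Cronbach's alpha computed from the usual item-variance formula coincides exactly with the average-measures consistency ICC, (MS_samples - MS_error)/MS_samples, from the two-way ANOVA. A small Python sketch with invented ratings (raters play the role of "items", samples of "persons"):

```python
ratings = [[31, 29, 34], [40, 38, 41], [22, 20, 25], [35, 33, 36]]
n, k = len(ratings), len(ratings[0])

def var(xs):
    """Sample variance (denominator len - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Cronbach's alpha from the item-variance formula
rater_vars = [var([ratings[i][j] for i in range(n)]) for j in range(k)]
total_var = var([sum(row) for row in ratings])
alpha = k / (k - 1) * (1 - sum(rater_vars) / total_var)

# Average-measures consistency ICC from the two-way ANOVA mean squares
grand = sum(map(sum, ratings)) / (n * k)
row_means = [sum(r) / k for r in ratings]
col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
ss_rows = k * sum((m - grand) ** 2 for m in row_means)
ss_cols = n * sum((m - grand) ** 2 for m in col_means)
ms_rows = ss_rows / (n - 1)
ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
icc_avg = (ms_rows - ms_err) / ms_rows

# alpha and icc_avg agree to floating-point precision
```

So "alpha vs. ICC" is less a choice between two statistics than a choice among ICC variants (single vs. average measures, consistency vs. absolute agreement).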
If anybody could send me an e-copy, I would appreciate it.
Art Kendall
Social Research Consultants
In reply to this post by bdates
Thanks, Brian. The article looks right up my alley. -Steve
In reply to this post by Art Kendall
Thank you.
On 4/14/2014 5:24 PM, Salbod [via SPSSX Discussion] wrote: Hi Art: Here is a copy of the Bravo and Potvin article. Enjoy, Steve

Art Kendall
Social Research Consultants