Interrater reliability Question


Interrater reliability Question

Salbod
I'm interested in obtaining an inter-rater reliability coefficient for 15 speech samples being rated by 20 raters on a 50-pt scale. Is the coefficient I'm looking for found in Scale -> Intraclass correlation coefficient?

TIA,

Stephen Salbod, Pace University, NYC

Re: Interrater reliability Question

bdates
Stephen,

That's exactly right.  The samples will constitute the cases, raters the variables, and the scores will be in the cells.
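(In SPSS that layout feeds Analyze > Scale > Reliability Analysis with the ICC statistic checked. As a cross-check outside SPSS, here is a minimal sketch of a two-way random-effects ICC built from the usual ANOVA mean squares in Python; the data matrix is hypothetical, rows = samples, columns = raters, and this is not SPSS's own code.)

```python
import numpy as np

def icc_two_way(scores):
    """Two-way random-effects ICC from a cases-by-raters matrix.

    scores: rows = cases (speech samples), columns = raters.
    Returns (single-measure, average-measure) ICC for absolute
    agreement, computed from the two-way ANOVA mean squares.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-case means
    col_means = scores.mean(axis=0)   # per-rater means

    ss_rows = k * ((row_means - grand) ** 2).sum()   # between cases
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))            # residual

    single = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    average = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)
    return single, average
```

With perfect agreement both coefficients are 1; a constant bias in one rater pulls the absolute-agreement values below 1, which is the difference between the "absolute agreement" and "consistency" choices in the SPSS dialog.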

Brian


=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD


Re: Interrater reliability Question

Salbod
Thanks, Brian. I'm thinking of going with alpha rather than the ICC. --Steve


Re: Interrater reliability Question

Rich Ulrich
[re-posted, on nabble, with minor revision]
Given 20 raters, I would definitely want to run the simple ANOVA, Raters x Samples, and look at the spreads of the means. Since every reliability statement is a reflection of both the scale and the population, it is good to confirm that the sub-sample shows the expected sort of variation.

That is: it is disappointing if all scores lump at one end of the scale, which could yield low reliability; and a strong result is non-robust if it depends on one or two extreme cases. This is the ICC information: are the Samples (cases) different?
 
And the raters' means can show whether there is a systematic rater difference, if that is something that matters to you. Rater differences in means will lower the ICC somewhat, so they probably should be mentioned if they exist.

The correlations between raters directly give the information that alpha is based on, and those should be interesting too, especially if there is value in detecting which raters are the least reliable - for purposes of training, more detailed reporting, etc.
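A minimal sketch of that last idea (in Python, assuming the same hypothetical cases-by-raters matrix): correlate each rater with the mean of the remaining raters, the rater analog of a corrected item-total correlation. A low value flags a rater who tracks the consensus poorly.

```python
import numpy as np

def rater_total_correlations(scores):
    """Correlate each rater with the mean of the remaining raters.

    scores: rows = cases, columns = raters. Returns one correlation
    per rater; low values mark candidates for retraining or review.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    out = []
    for j in range(k):
        # Mean rating of all raters except rater j, per case
        others = np.delete(scores, j, axis=1).mean(axis=1)
        out.append(np.corrcoef(scores[:, j], others)[0, 1])
    return np.array(out)
```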


--
Rich Ulrich

Re: Interrater reliability Question

Salbod
Hi Rich, thank you for the clarification. The samples are similar. I'm going with the ICC. Do you by any chance know of an article that compares alpha to the ICC?

--Steve


Re: Interrater reliability Question

bdates
Stephen,

This may be helpful.  I can't find my e-copy right now, but I do have the reference.  If I recall correctly, it gets at your question.

Bravo G, Potvin L. Estimating the reliability of continuous measures with Cronbach's alpha or the intraclass correlation coefficient: toward the integration of two traditions. J Clin Epidemiol. 1991;44(4-5):381-90.
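The tie between the two traditions can be illustrated numerically: treating raters as items, Cronbach's alpha equals the two-way, consistency-type, average-measures ICC. A small sketch in Python (the data matrix is made up for illustration; the equality itself is a standard algebraic identity):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha, treating the raters (columns) as 'items'."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # one variance per rater
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of case totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def icc_consistency_avg(scores):
    """Two-way consistency, average-measures ICC from ANOVA mean squares."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows
```

The two functions return identical values on any cases-by-raters matrix, which is why alpha ignores (and the absolute-agreement ICC penalizes) systematic rater differences in means.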


Re: Interrater reliability Question

Art Kendall
If anybody could send me an e-copy, I would appreciate it.
Art Kendall
Social Research Consultants

Re: Interrater reliability Question

Salbod
In reply to this post by bdates
Thanks, Brian. The article looks right up my alley. --Steve


Re: Interrater reliability Question

Salbod
In reply to this post by Art Kendall
Hi Art: Here is a copy of the Bravo and Potvin article. Enjoy, Steve

Bravo,_Potvin-Alpha1991.pdf

Re: Interrater reliability Question

Art Kendall
Thank you.
Art Kendall
Social Research Consultants
