Interrater agreement: extra raters


Interrater agreement: extra raters

henryilian

Hi,

 

I'm doing an interrater agreement study on a case-reading instrument. There are normally four raters using a rating instrument with 120 items. In this study, two other raters are participating as experts. I'm not sure what the presence of the experts contributes to the rating portion of the study, since they aren't regular raters. Their participation in a review session is important, but I'm not sure about their effect on the statistics used to identify problematic items. Any thoughts on this?

 

Henry

 




Re: Interrater agreement: extra raters

bdates

Lots of thoughts, since I'm in the middle of an inter-rater study as I write. If you consider your experts a 'gold standard', then I'd test each of your regular raters against one of them (you don't need both) to see how well they agree with that standard. The actual study, however, should use just the regular raters. The exception typically occurs when the experts are going to conduct ongoing reviews and 'corrections' of ratings. I don't favor this last model, but there are lots of folks who disagree with me and do it anyway. Clearly there are many possible rater combinations, but what you really need in order to show that the ratings are reliable is agreement among the four regular raters who will actually generate the data.

 

Brian
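A minimal SPSS sketch of Brian's first suggestion, assuming (hypothetically) that the ratings have been restructured into a long layout with one row per rated item per case, and columns rater1 to rater4 and expert1 holding the category each rater assigned:

* Cohen's kappa for each regular rater against the gold-standard expert.
CROSSTABS /TABLES=rater1 BY expert1 /STATISTICS=KAPPA.
CROSSTABS /TABLES=rater2 BY expert1 /STATISTICS=KAPPA.
CROSSTABS /TABLES=rater3 BY expert1 /STATISTICS=KAPPA.
CROSSTABS /TABLES=rater4 BY expert1 /STATISTICS=KAPPA.

Each CROSSTABS prints one kappa table; the same pattern extends to the second expert if both are of interest.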

 




Re: Interrater agreement: extra raters

Art Kendall
In reply to this post by henryilian
Is this part of an effort to train raters so that they are consistent with the experts?
Art Kendall
Social Research Consultants

Re: Interrater agreement: extra raters

Art Kendall
In reply to this post by henryilian
Please give a more detailed overview of the project.

What is the response scale?
Are the 120 items parts of summative scales?
What concepts are the summative scales designed to measure?
Who are the "experts"?

By "case reading" do you mean something like legal cases?

Once the raters are trained, I presume you want their combined scores to be very much like the combined scores of the experts. Is that correct?

Art Kendall
Social Research Consultants
On 6/14/2013 12:15 PM, Ilian, Henry (ACS) wrote:

That’s exactly what the purpose is. I’m afraid that using the experts (the regular raters are already supposed to be expert, but the two added raters have more expertise) will throw off the levels of agreement/disagreement. I think I need to treat the two experts’ ratings differently, but I’m not clear on how.

 


Re: Interrater agreement: extra raters

Dale
In reply to this post by henryilian

I suppose it would depend on what the researcher wished to do with them. I assume that, as they are "expert," they serve as a separate reference point: one would compare the other raters' data to the experts', somewhat as one would in a content validity study, I suppose.

 

Dale

 

 

Dale Pietrzak, Ed.D., LPCMH, CCMHC
Director, Office of Academic Evaluation and Assessment
University of South Dakota
Slagle Hall Room 102
414 East Clark Street
605-677-6497

 

 

 



Re: Interrater agreement: extra raters

henryilian

That would make the study a combined reliability/content-validity study. Would you separately analyze the four regular raters and then compare them with each of the experts? If so, what would be the best way to make the comparisons? An analysis comparing each of the regular raters to each of the experts using Cohen's kappa would work, but there would be a great many steps and a lot of output to report. Is there a simpler way to do it?
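A hedged sketch of one way to cut the repetition, using the SPSS macro facility (all names are hypothetical, and the data are assumed to be in a long layout with one row per rated item and one column per rater):

* Hypothetical macro: Cohen's kappa for each listed rater against one reference rater.
DEFINE !expkappa (raters = !CHAREND('/') / expert = !TOKENS(1))
!DO !r !IN (!raters)
CROSSTABS /TABLES=!r BY !expert /STATISTICS=KAPPA.
!DOEND
!ENDDEFINE.
* The four regular raters against each expert.
!expkappa raters = rater1 rater2 rater3 rater4 / expert = expert1.
!expkappa raters = rater1 rater2 rater3 rater4 / expert = expert2.

The output is still one table per pair, but the syntax stays short; if the volume of output is the main concern, the OMS facility can route the kappa tables into a single dataset for a compact summary.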

 

 



Re: Interrater agreement: extra raters

Art Kendall
In reply to this post by henryilian
Please keep the discussion on the list. That way others can respond to each other, and people searching the archives in the future can see what was discussed.


There are many things you can do:
1) Look up "Krippendorff" in the archives.
2) Treat it as sets of (a) all 6 raters, (b) the 4 regular raters, and (c) the 2 expert raters; a sketch of this follows below.
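One hedged illustration of point 2 uses the third-party KALPHA macro distributed by Hayes and Krippendorff (it must be downloaded and run before these calls will work); the variable names are hypothetical, and /level = 2 assumes the ratings are treated as ordinal:

* (a) All six raters.
KALPHA judges = rater1 rater2 rater3 rater4 expert1 expert2 /level = 2 /detail = 0 /boot = 0.
* (b) The four regular raters.
KALPHA judges = rater1 rater2 rater3 rater4 /level = 2 /detail = 0 /boot = 0.
* (c) The two expert raters.
KALPHA judges = expert1 expert2 /level = 2 /detail = 0 /boot = 0.

Comparing the three alphas shows whether including the experts raises or lowers overall agreement relative to the regular raters alone.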

I will be out of the country for a month but will check the list when I get back to see what others have said and how you make out.
I am doing something similar to this for an NGO through AAAS's On-call Scientists; the only difference is that the "cases" are legal cases.
Art Kendall
Social Research Consultants
On 6/14/2013 3:20 PM, Ilian, Henry (ACS) wrote:

Art,

Thanks. I'll start with the scale: it is not summative. Each item stands alone, which is why we're particularly interested in identifying problematic items. The cases are records of child protective investigations, and the raters rate different aspects of child protective practice (tasks) in terms of whether a task was 1) not done, 2) partially accomplished, or 3) fully accomplished. Each item on the instrument is a task a child protective worker must complete. Since the tasks have different degrees of complexity, some of the items have a four-category scale (representing two levels of partial completion), most have three categories, and a few have two (a task was done or not done), e.g., the child protective worker named a person to monitor a safety plan. There are also some items in which a particular task may not be relevant to the case, and these can be rated as not applicable (99). Some of the items require finding a notation in the case record that a task was accomplished; others require the raters' judgment.
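One data-handling step this implies, sketched in SPSS under the same hypothetical long layout (one row per rated item, one column per rater): declare the 99 code as user-missing so that "not applicable" ratings drop out of the agreement statistics instead of counting as an extra category.

* Treat the not-applicable code as user-missing for every rater column.
MISSING VALUES rater1 rater2 rater3 rater4 expert1 expert2 (99).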

 

Each of the four raters reads and rates a set of cases using the instrument. Periodically we do an interrater study in which all four raters rate the same case. The raters are people who have read and rated a large number of cases over several years, although not as part of the current project, which is only about six months old. One of the two experts has rated more cases than the regular raters; the other has had many years of experience doing and then supervising child protective work. The raters have been trained, and we hope that their ratings will be similar to those of the experts. This is one of the things we are trying to find out.
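For the sessions in which everyone rates the same case, a hedged sketch of one way to flag problematic items (hypothetical names again; item_id identifies the item in the long layout):

* 1 if all four regular raters chose the same category, 0 otherwise.
COMPUTE all_agree = (rater1 = rater2 AND rater2 = rater3 AND rater3 = rater4).
* Per-item rate of full agreement across the rated cases.
AGGREGATE /OUTFILE=* MODE=ADDVARIABLES /BREAK=item_id /pct_agree = MEAN(all_agree).

Items with a low pct_agree, or items where the regular raters agree with one another but not with the experts, would be natural candidates for the review session.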

 

Any insights you have to offer will be welcomed,

 

Henry

 


 

