
Re: Inter-rater agreement for multiple raters or something else?

Posted by bdates on Nov 05, 2014; 1:56pm
URL: http://spssx-discussion.165.s1.nabble.com/Inter-rater-agreement-for-multiple-raters-or-something-else-tp5727786p5727794.html

Mack,

The Excel sheet on Jason's website is mine, but there are problems with it. For some reason it keeps getting corrupted, so I'd be careful about the results. I'll send you my macro offline. More to the point, Fleiss is one of the few authors who provided 'official' formulae for category kappas as well as an overall solution. I'd be interested to know whether he actually had formulae for individual cases/subjects; I've never seen that in the literature. As an idea, you could write syntax to loop through the cases, count how many raters assigned each of the four diagnostic values to each case, and then compute a variable counting the number of diagnoses with more than one, or two, or ... occurrences (whatever value you set as a cutoff). That would give you an idea of the raw agreement and help distinguish 'difficult' patients from 'easy' patients.
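A minimal sketch of what that syntax might look like, assuming ten raters whose diagnoses sit in variables rater1 to rater10 and are coded 1=A, 2=B, 3=C, 4=D (the variable names, codes, and rater count are placeholders, not taken from your data):

* Count, for each patient, how many of the raters chose each diagnosis.
COUNT nA = rater1 TO rater10 (1).
COUNT nB = rater1 TO rater10 (2).
COUNT nC = rater1 TO rater10 (3).
COUNT nD = rater1 TO rater10 (4).
* Number of different diagnoses assigned more than once; swap 1 for any cutoff.
COMPUTE ndx = (nA GT 1) + (nB GT 1) + (nC GT 1) + (nD GT 1).
* Fleiss's per-subject proportion of agreement: sum of nj*(nj-1) over n*(n-1), with n = 10 raters here.
COMPUTE p_i = (nA*(nA-1) + nB*(nB-1) + nC*(nC-1) + nD*(nD-1)) / (10*9).
EXECUTE.

The last COMPUTE gives the per-subject agreement from Fleiss (1971), the P(i) mentioned in the message below; patients with low p_i would be the 'difficult' ones.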



Brian Dates, M.A.
Director of Evaluation and Research | Evaluation & Research | Southwest Counseling Solutions
Southwest Solutions
1700 Waterman, Detroit, MI 48209
313-841-8900 (x7442) office | 313-849-2702 fax
[hidden email] | www.swsol.org


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of MJury
Sent: Tuesday, November 04, 2014 6:03 PM
To: [hidden email]
Subject: Re: Inter-rater agreement for multiple raters or something else?

Dear Art and David, thanks so much for your interest!

A, B, C, and D are the four possible diagnostic categories, and the raters were asked to choose only one of them (mutually exclusive and exhaustive categories).

Patients are arranged in rows and raters in columns. I calculated the overall Fleiss kappa for all patients and all raters, but I would also be interested in identifying the patients that are the most debatable/controversial from the diagnostic point of view. I am not sure whether Fleiss kappa is the best solution here, as I understand its main goal is to assess the reliability of the raters, while I am more interested in recognizing the clinical phenotypes that cause disagreement between raters. Maybe Fleiss's per-subject agreement P(i) (the extent to which raters agree on the i-th subject) would be appropriate? However, the Fleiss kappa Excel spreadsheet I downloaded from Jason E. King's website does not calculate that. Brian, thanks a lot for your kindness; does your macro calculate P(i) for Fleiss kappa?

I would appreciate any comments.

With best regards,

Mack



--
View this message in context: http://spssx-discussion.1045642.n5.nabble.com/Inter-rater-agreement-for-multiple-raters-or-something-else-tp5727786p5727790.html
Sent from the SPSSX Discussion mailing list archive at Nabble.com.

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD