
Inter-rater agreement for multiple raters or something else?

Posted by MJury on Nov 04, 2014; 3:31pm
URL: http://spssx-discussion.165.s1.nabble.com/Inter-rater-agreement-for-multiple-raters-or-something-else-tp5727786.html


Hi everyone!

I would appreciate any comments that would help me analyse my data. The project involved sending the clinical data of 12 patients to 30 clinicians and asking each of them for a diagnosis (A, B, C or D). We now have the results back and would like to assess agreement between the raters. I computed Fleiss' generalized kappa across all raters and subjects and found that the raters differ considerably in their opinions. Now I would like to work out which particular patients had the highest variability in diagnosis, so that I can identify the clinical phenotypes that are most challenging for clinicians. I tried computing Fleiss' kappa for multiple raters on one subject at a time, but that does not seem to work here, as it returns similar values for all patients.
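
To make it concrete, here is a rough sketch (in Python, with made-up counts) of what I understand by the per-subject agreement term P_i in Fleiss' kappa; the patient rows below are purely illustrative, not my real data:

import numpy as np

# Hypothetical example: rows = patients, columns = diagnoses (A, B, C, D);
# each cell holds how many of the 30 clinicians chose that diagnosis for that patient.
counts = np.array([
    [30,  0, 0, 0],   # unanimous patient
    [10, 10, 5, 5],   # highly contested patient
    # ... one row per patient ...
])

n_raters = counts.sum(axis=1)  # should be 30 for every patient

# Per-subject observed agreement P_i from the Fleiss kappa framework:
# P_i = (sum_j n_ij^2 - n) / (n * (n - 1))
p_i = (np.sum(counts**2, axis=1) - n_raters) / (n_raters * (n_raters - 1))

# Lower P_i means more disagreement; rank patients from most to least contested.
order = np.argsort(p_i)
for rank, idx in enumerate(order, start=1):
    print(f"{rank}. patient {idx + 1}: per-subject agreement = {p_i[idx]:.3f}")

My thinking is that ranking the patients by P_i would flag the most contested phenotypes, but I am not sure whether this is the right approach.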

Could you please give me some tips, as I think I'm missing something here? :)

With warm regards and many thanks,

Mack