Good morning
I would appreciate any suggestions on the following:
For a sample of 1400 compositions, each rated on a 0-15 scale by two
raters randomly chosen from a pool of 120, would you say that the ICC,
Pearson's r, or Cohen's kappa (after categorising the grades) would be an
appropriate method for checking inter-rater reliability?
The current layout is:
rows-->subjects, column1-->raterA, column2-->raterB
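In case it helps, here is a minimal sketch of how I would compute each of the three candidate statistics from that layout. This assumes Python with pandas, pingouin, and scikit-learn (the post names no software); the file name "ratings.csv" and the band cut points for kappa are only placeholders.

```python
# Minimal sketch, assuming pandas + pingouin + scikit-learn are acceptable tools.
# Layout as described: rows = subjects, column 1 = rater A's score, column 2 = rater B's score.
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# Hypothetical file name; replace with the actual data source.
wide = pd.read_csv("ratings.csv", names=["raterA", "raterB"])
wide["subject"] = wide.index

# Pearson's r between the two rating columns.
pearson_r = wide["raterA"].corr(wide["raterB"])

# Reshape to long format (subject, rater, score) for the ICC table.
long = wide.melt(id_vars="subject", value_vars=["raterA", "raterB"],
                 var_name="rater", value_name="score")

# ICC table; ICC1 (one-way random effects) seems the relevant row when each
# subject's two raters are drawn at random from the pool rather than being fixed.
icc = pg.intraclass_corr(data=long, targets="subject",
                         raters="rater", ratings="score")

# Cohen's kappa after categorising the 0-15 grades into bands (cut points illustrative only).
bands = [-1, 5, 10, 15]
a_cat = pd.cut(wide["raterA"], bins=bands, labels=False)
b_cat = pd.cut(wide["raterB"], bins=bands, labels=False)
kappa = cohen_kappa_score(a_cat, b_cat)

print(icc)
print("Pearson r:", pearson_r, " Cohen's kappa:", kappa)
```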
Many thanks
Vassilis