For a single continuous score, you get measures of
both similarity and difference by running a paired t-test
and looking at both the t statistic and the correlation.
The "intraclass correlation" (ICC) is especially uninteresting
in the two-rater case, since it folds those two separate
pieces of information into a single number.
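A minimal sketch of that two-number summary in plain Python, using only the standard library; the data and the function name are hypothetical, for illustration only:

```python
import math
import statistics

def paired_t_and_r(x, y):
    """Paired t statistic (the difference measure) and Pearson r
    (the similarity measure) for two raters scoring the same subjects."""
    n = len(x)
    diffs = [a - b for a, b in zip(x, y)]
    sd_d = statistics.stdev(diffs)                       # sample SD of the differences
    t = statistics.mean(diffs) / (sd_d / math.sqrt(n))   # paired t, df = n - 1
    # Pearson correlation, computed by hand
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    return t, n - 1, r

# Hypothetical scores from two raters on eight subjects
rater1 = [4.0, 5.5, 3.0, 6.0, 7.5, 2.0, 5.0, 6.5]
rater2 = [3.5, 5.0, 3.5, 5.5, 7.0, 2.5, 4.5, 6.0]
t, df, r = paired_t_and_r(rater1, rater2)
print(f"t = {t:.2f} on {df} df; r = {r:.2f}")
```

A large r together with a sizable t would say the raters track each other but one runs systematically higher -- the two separate facts that a single ICC blurs together.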
For categorical ratings, the kappa has very little potential
for generalizing across studies when there are more than two
categories. That is, if you have a slew of parallel ratings, you
could have a use for multi-category kappas; otherwise, not. It can
be useful to re-examine a table with multiple categories as
several 2x2 tables, with Yes/No for each single category.
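One way to do that collapsing, sketched in Python; the three-category ratings and the helper name are made up for illustration:

```python
from collections import Counter

def collapse_to_2x2(r1, r2, category):
    """Collapse two raters' multi-category labels into one 2x2 Yes/No
    table for a single category. Returns counts (yy, yn, ny, nn),
    where yn means rater 1 said Yes and rater 2 said No."""
    c = Counter((a == category, b == category) for a, b in zip(r1, r2))
    return (c[(True, True)], c[(True, False)],
            c[(False, True)], c[(False, False)])

# Hypothetical three-category ratings on seven subjects
rater1 = ["mild", "severe", "none", "mild", "severe", "none", "mild"]
rater2 = ["mild", "mild",   "none", "mild", "severe", "mild", "mild"]
print(collapse_to_2x2(rater1, rater2, "mild"))
```

Each category then gets its own 2x2 kappa and its own test for difference.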
The 2x2 kappa gives a measure of association, and the
corresponding test for difference is "McNemar's test for
changes" -- that one compares the number of Yes/No responses
to the number of No/Yes responses.
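For a 2x2 table, both numbers reduce to short formulas. A sketch in plain Python with made-up counts; the p-value uses the 1-df chi-square identity with erfc, a choice of mine rather than anything from the thread:

```python
import math

def kappa_and_mcnemar(yy, yn, ny, nn):
    """Cohen's kappa (association) and McNemar's chi-square (difference)
    for a 2x2 agreement table. yn = rater 1 Yes / rater 2 No; ny is
    the reverse. Counts here are hypothetical."""
    n = yy + yn + ny + nn
    po = (yy + nn) / n                                   # observed agreement
    pe = ((yy + yn) * (yy + ny)
          + (ny + nn) * (yn + nn)) / n ** 2              # chance agreement
    kappa = (po - pe) / (1 - pe)
    chi2 = (yn - ny) ** 2 / (yn + ny)                    # McNemar: YN vs NY counts
    p = math.erfc(math.sqrt(chi2 / 2))                   # upper tail, chi-square 1 df
    return kappa, chi2, p

k, chi2, p = kappa_and_mcnemar(yy=40, yn=5, ny=15, nn=40)
print(f"kappa = {k:.2f}, McNemar chi2 = {chi2:.2f}, p = {p:.4f}")
```

In this made-up table the raters agree well (kappa about 0.6), yet McNemar is significant because the 15 NY disagreements far outnumber the 5 YN ones -- association and difference answer different questions.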
--
Rich Ulrich
> Date: Tue, 15 May 2012 02:47:13 -0700
> From: [hidden email]
> Subject: How to analze data?
> To: [hidden email]
>
> Hi!
>
> I have two raters' scores, and I want to know how to check inter-rater
> reliability using SPSS,
>
> and I would also like to know how to do a Cohen's kappa analysis.
>
> Thank you very much.
>
> Sathia
>
> --
> View this message in context: http://spssx-discussion.1045642.n5.nabble.com/How-to-analze-data-tp5709894.html
> Sent from the SPSSX Discussion mailing list archive at Nabble.com.
>
> =====================
> To manage your subscription to SPSSX-L, send a message to
> [hidden email] (not to SPSSX-L), with no body text except the
> command. To leave the list, send the command
> SIGNOFF SPSSX-L
> For a list of commands to manage subscriptions, send the command
> INFO REFCARD