Measures of agreement for individual categories when the categories are ordinal

Measures of agreement for individual categories when the categories are ordinal

Margaret MacDougall
Dear all

  Apologies for cross-posting

  I have been using an intraclass correlation coefficient to analyse my data, which are on an ordinal scale from 1 to 7. The analysis involves a two-way mixed-effects model measuring overall absolute agreement. I would now like to complement these results with results on the level of agreement for each category individually (assuming two raters).

  As I understand from my reading, there are several definitions of kappa statistics that allow chance-corrected inter-rater agreement to be assessed for, say, grade A only. However, the related calculations appear to assume only two categories (in this example, 'grade A' or 'other grade'). Collapsing everything else into 'other grade' removes the capacity to assess how far apart on the ordinal scale two examiners are when one assigns grade A and the other does not. I wonder, therefore, whether anyone is aware of alternative chance-corrected approaches to assessing agreement between two raters for a single category in which, whenever the raters disagree, the extent of their disagreement is taken into account.
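
  To make the idea concrete, here is a toy sketch of the kind of statistic I have in mind: a chance-corrected coefficient in which the penalty grows with the distance between the two grades, i.e. a linearly weighted kappa. (Plain Python; the ratings below are invented for illustration, not my actual data.)

from collections import Counter

def weighted_kappa(r1, r2, k=7):
    """Linearly weighted Cohen's kappa for two raters on a 1..k ordinal scale."""
    n = len(r1)
    obs = Counter(zip(r1, r2))          # observed (grade_i, grade_j) pair counts
    m1, m2 = Counter(r1), Counter(r2)   # marginal grade counts per rater
    # Disagreement weight grows linearly with distance: |i - j| / (k - 1)
    w = lambda i, j: abs(i - j) / (k - 1)
    d_obs = sum(w(i, j) * obs[(i, j)] / n
                for i in range(1, k + 1) for j in range(1, k + 1))
    d_exp = sum(w(i, j) * (m1[i] / n) * (m2[j] / n)
                for i in range(1, k + 1) for j in range(1, k + 1))
    return 1 - d_obs / d_exp            # 1 = perfect agreement, 0 = chance level

rater1 = [1, 2, 3, 4, 5, 6, 7, 2, 3, 4]
rater2 = [1, 2, 4, 4, 5, 7, 7, 2, 2, 4]
print(weighted_kappa(rater1, rater2))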

  I look forward to being educated!

  Best wishes

  Margaret





Re: Measures of agreement for individual categories when the categories are ordinal

Marta García-Granero
Hi Margaret

There is a good freeware program called Kappa.exe (from the PEPI 4.0
collection of DOS programs) that will compute kappa for ordinal
scales.

http://www.sagebrushpress.com/pepibook.html
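
An aside, in case DOS is a hurdle: with quadratic weights, weighted kappa is known to closely approximate the two-way mixed, absolute-agreement ICC you have already computed (Fleiss & Cohen, 1973), so it can double as a sanity check on your earlier result. A minimal sketch using scikit-learn's cohen_kappa_score (toy ratings, for illustration only):

# Quadratically weighted kappa closely approximates the two-way mixed,
# absolute-agreement ICC (Fleiss & Cohen, 1973).
# The ratings below are invented toy data.
from sklearn.metrics import cohen_kappa_score

rater1 = [1, 2, 3, 4, 5, 6, 7, 2, 3, 4]
rater2 = [1, 2, 4, 4, 5, 7, 7, 2, 2, 4]
print(cohen_kappa_score(rater1, rater2, weights="quadratic"))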

HTH,

Marta


Re: Measures of agreement for individual categories when the categories are ordinal

Margaret MacDougall
Dear Marta

  Thank you for this kind reply. Having had a brief look at how the program works, I am somewhat discouraged to find that I would have to enter the individual scores by hand to obtain my results. The data are currently in an SPSS spreadsheet with 1718 entries.
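
  For now, I may instead try computing a weighted kappa directly from the file, along the lines of the sketch below (the file name and column names are placeholders for my own; it assumes pandas, pyreadstat and scikit-learn are available):

# Read the ratings straight from the SPSS file and compute a linearly
# weighted kappa, avoiding any hand entry. "grades.sav", "rater1" and
# "rater2" are placeholder names, not the real ones.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

df = pd.read_spss("grades.sav")       # loads all 1718 records at once
kappa = cohen_kappa_score(df["rater1"], df["rater2"], weights="linear")
print(f"Linearly weighted kappa: {kappa:.3f}")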

  Best wishes

  Margaret



