The r_wg index by James, Demaree, & Wolf (1984, 1993)


The r_wg index by James, Demaree, & Wolf (1984, 1993)

E. Bernardo
The r_wg index (James, Demaree, & Wolf, 1984, 1993) was described in Pasisz & Hurtz (2009).  Is this index available in SPSS?  What is the name of this index in SPSS?
 
Pasisz, D. J., & Hurtz, G. M. (2009). Testing for between-group differences in within-group interrater agreement. Organizational Research Methods, 12, 590-613.
 
James, L. R., Demaree, R. G., & Wolf, G. (1984). Estimating within-group interrater reliability with and without response bias. Journal of Applied Psychology, 69, 85–98.
 
James, L. R., Demaree, R. G., & Wolf, G. (1993). rwg: An assessment of within-group interrater agreement. Journal of Applied Psychology, 78, 306–309.

 


Re: The r_wg index by James, Demaree, & Wolf (1984, 1993)

Hurtz, Gregory M
Eins,
r(wg) is not computed directly in SPSS but it's easy to compute from the variance across raters, which SPSS easily gives.
 
In simple notation:
 
r(wg) = 1 - (V/E)
 
Where V is the variance among raters, and E is a user-defined error term. In common practice, the expected uniform variance is used to define error as "random ratings":
 
E = (A^2 - 1) / 12
 
I assume this inquiry is related to your previous posting where you had a 4-point scale. So your error term would be:
 
(4^2 - 1)/12 = (16-1)/12 = 1.25
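That error term is just the (population) variance of a discrete uniform distribution over the A scale points, which is where the (A^2 - 1)/12 formula comes from. A quick illustrative check in Python (not SPSS syntax; the function name is mine):

```python
# Check that the population variance of a discrete uniform distribution
# over the points 1..A equals (A**2 - 1)/12, the "random ratings" error term E.
def uniform_var(a):
    vals = range(1, a + 1)
    m = sum(vals) / a
    return sum((x - m) ** 2 for x in vals) / a  # population variance

print(uniform_var(4))  # 1.25, matching (4**2 - 1)/12
```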
 
All you have to do then is compute the variance among your raters (V), divide it by 1.25 (E), and subtract from 1. If your data file is set up with raters as separate variables/columns, for example with the names:
 
rater1, rater2, rater3, ....rater10
 
then it's pretty simple to get r(wg) through a syntax statement:
 
COMPUTE rwg = 1 - (variance(rater1 to rater10)/1.25).
EXECUTE.
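If you want to sanity-check the SPSS result outside SPSS, the same computation is easy to reproduce. An illustrative Python version (the function name and the sample ratings are mine, not from the thread):

```python
# Illustrative r(wg) = 1 - V/E for one row (one ratee) of ratings; not SPSS syntax.
def rwg(ratings, scale_points):
    n = len(ratings)
    mean = sum(ratings) / n
    v = sum((x - mean) ** 2 for x in ratings) / (n - 1)  # sample variance, matching SPSS VARIANCE
    e = (scale_points ** 2 - 1) / 12                     # expected variance of uniform "random" ratings
    return 1 - v / e

# Ten raters on a 4-point scale, mostly agreeing:
print(round(rwg([3, 3, 3, 4, 3, 3, 3, 3, 4, 3], 4), 4))  # 0.8578
```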
 
Of course if you want to use another error term (E), for example following James et al.'s scenarios where you can model rater biases, then just plug in the error value in place of 1.25.
 
Note also that some object to the "looseness" of a user-defined error term. Recently an alternative index called a(wg) was created that seems to have a stronger tie to the original principles behind Cohen's kappa (although each statistic is applicable to a different situation). In a(wg), the error term is always the maximum possible variance that could occur at the raters' mean on the fixed response scale. So, as the rater mean approaches the ceiling (4) or floor (1) of your rating scale, variance is restricted, and the error term is adjusted accordingly.
 
However, if you're still working on your problem of comparing agreement across two groups, I'm not exactly sure how to do a direct statistical comparison of two a(wg) values unless the means for the two groups are identical, so that their error terms don't change.
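To make the a(wg) idea concrete, here is an illustrative Python sketch. The formula is my reading of Brown & Hauenstein (2005) — the error term is the maximum variance attainable at the observed mean M on a scale running from L to H, i.e. (H - M)(M - L) with a sample-size correction, and the index is scaled so that maximum disagreement yields -1 — so verify it against the article before relying on it:

```python
# Sketch of a(wg); check the formula against Brown & Hauenstein (2005).
# The error term is the maximum possible variance at the observed rater mean,
# so it shrinks as the mean approaches the scale floor `low` or ceiling `high`.
# Note: undefined (0/0) when every rater sits exactly at a scale endpoint.
def awg(ratings, low, high):
    n = len(ratings)
    m = sum(ratings) / n
    s2 = sum((x - m) ** 2 for x in ratings) / (n - 1)   # sample variance among raters
    max_var = (high - m) * (m - low) * n / (n - 1)      # max sample variance at mean m
    return 1 - 2 * s2 / max_var                         # scaled so max disagreement -> -1

# Maximum disagreement on a 1-4 scale (raters split between the endpoints):
print(awg([1, 1, 4, 4], 1, 4))  # -1.0
```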
 
Anyhow, hope this helps get you to your next stage of analysis.
 
The reference for a(wg) is:
 
Brown, R. D., & Hauenstein, N. M. A. (2005). Interrater agreement reconsidered: An alternative to the rwg indices. Organizational Research Methods, 8(2), 165-184.
 

Greg Hurtz, Ph.D.
Associate Professor
Industrial & Organizational Psychology
California State University, Sacramento
http://www.csus.edu/indiv/h/hurtzg
 

From: SPSSX(r) Discussion [[hidden email]] On Behalf Of Eins Bernardo [[hidden email]]
Sent: Wednesday, August 18, 2010 8:14 PM
To: [hidden email]
Subject: The r_wg index by James, Demaree, & Wolf (1984, 1993)


Re: The r_wg index by James, Demaree, & Wolf (1984, 1993)

Bruce Weaver
Hurtz, Gregory M wrote
--- snip ---
COMPUTE rwg = 1 - (variance(rater1 to rater10)/1.25).
EXECUTE.

--- snip the rest ---
Hi Greg.  What you're suggesting here will compute a value of r(wg) on each row of the data file.  Don't you need to compute the variance of the rater means?  If so, just throw in an AGGREGATE first to obtain a mean for each rater.

HTH.

--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).