statistical test if raters not independent of each other

statistical test if raters not independent of each other

J McClure
Hi,
Participants in my study completed a survey of suicidal thoughts and
behaviors.  Then a nurse (not involved in the study) interviewed the
participant and, based on that interview, completed a non-research
form about suicidal thoughts and behaviors.  Then, prior to the doctor
interviewing the patient, the nurse gave the doctor a verbal report of
the patient's suicidal thoughts and behaviors.  The doctor then wrote
a clinical note which included his/her assessment of suicidal thoughts
and behaviors.  (When I designed and started the study I did not know
the nurse gave the doctor a report prior to the doctor's interview of
the patient.)
I created a summary variable for the suicidal thoughts and behaviors,
SRisk. It has 5 categories (none, passive, active, plan, plan and
preparation). There is a summary variable for the participant (based on
the survey results), the nurse (based on their completion of a clinical
form), and the doctor (based on their clinical note).
I started by using kappa and just looked at pairwise comparisons:
participant vs. nurse, participant vs. doctor, and doctor vs. nurse.
I realized, however, that the doctor and nurse are not independent, since
the nurse gives the doctor a verbal report of his/her findings prior to
the doctor interviewing the patient.
Are there any tests that would look at the nurse vs. doctor agreement?
If not, I'll leave that out of my analysis.
Thanks for any ideas,
Jan
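
For concreteness, a minimal sketch of the pairwise kappas described here,
using the variable names (SIRISK, RN_SI, MD_SI) that Jan gives later in
the thread; CROSSTABS reports each kappa together with its asymptotic
standard error:

CROSSTABS /TABLES=SIRISK BY RN_SI /STATISTICS=KAPPA.
CROSSTABS /TABLES=SIRISK BY MD_SI /STATISTICS=KAPPA.
CROSSTABS /TABLES=RN_SI BY MD_SI /STATISTICS=KAPPA.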

Re: statistical test if raters not independent of each other

Mike
If I understand what you say below correctly, you have a situation
with two sources of agreement (more generally association):

(A)  Agreement/association due to nurse's communication with
the doctor

and

(B)  Agreement between doctor and nurse based on independent
observation of patient

If you had some doctors who had not communicated with the
nurses before observing the patients, you might be able to
estimate how much agreement is due to (B) alone.  If (B) alone
were not significantly different from the (A) + (B) situation,
then you might be able to argue that the nurse's report had
no impact (i.e., doctors effectively ignored what the nurses told
them).  However, on the basis of the anchoring and adjustment
heuristic, it is likely that the doctor's response was influenced
by the nurse's report.  So, the doctor's response is confounded
with the nurse's response, and there doesn't appear to be any way
to unconfound them unless you're able to get additional nurses
and doctors to independently assess patients.

-Mike Palij
New York University
[hidden email]
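
If such a no-communication subgroup existed, one rough way to check
Mike's point would be to compute the nurse-doctor kappa separately
within each subgroup and compare. A sketch, assuming a grouping flag
(here called nurse_reported, which is hypothetical - Jan notes below
that no such cases exist) and the rating variable names from Jan's
later post:

SORT CASES BY nurse_reported.
SPLIT FILE LAYERED BY nurse_reported.
CROSSTABS /TABLES=RN_SI BY MD_SI /STATISTICS=KAPPA.
SPLIT FILE OFF.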



Re: statistical test if raters not independent of each other

J McClure
Thanks Mike.  I do have about 60 participants who never saw the nurse,
and quite a few who saw the nurse but not the doctor.
At this point I have excluded both sets of participants from the
analysis. Can you suggest any type of analysis where I could use them?
I don't have any where I know the nurse did not communicate with the
doctor.
(The kappa for participant vs. MD is .133 and participant vs. nurse
.047. For nurse vs. doctor it's .388!)
Thanks!
Jan

Re: statistical test if raters not independent of each other

Mike
----- Original Message -----
From: "J McClure" <[hidden email]>
To: <[hidden email]>
Sent: Friday, November 12, 2010 7:37 PM
Subject: Re: statistical test if raters not independent of each other


> Thanks Mike.  I do have about 60 participants who never saw the nurse,
> and quite a few who saw the nurse but not the doctor.
> At this point I have excluded both sets of participants from the
> analysis. Can you suggest any type of analysis where I could use them?
> I don't have any where I know the nurse did not communicate with the
> doctor.

At first glance, I don't have any good ideas but this might take
some time to think through.  Others may see some method of attack
that makes use of all of the information.

> (The kappa for participant vs. MD is .133 and participant vs. nurse
> .047. For nurse vs. doctor it's .388!)

It would be nice to have confidence intervals for these kappas.
I'm willing to bet that kappa(nurse,patient) is not significant (i.e.,
the interval contains zero).  I'm less sure about kappa(doctor,patient),
but its value is not confidence-inspiring.  It probably shouldn't
come as a surprise that kappa(doctor,nurse) is much higher, but that
may be due to having two sources of agreement/association.
I think the question is whether one can decompose kappa(doctor,nurse)
given kappa(doctor,patient) and kappa(nurse,patient).

-Mike Palij
New York University
[hidden email]
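
A rough Wald-type interval can be put around each kappa from the value
and the asymptotic standard error that CROSSTABS prints beside it. A
sketch; the .388 is the nurse-vs-doctor kappa Jan reported, but the
standard error below is a made-up placeholder, not a value from her
output:

DATA LIST FREE / kappa ase.
BEGIN DATA
.388 .07
END DATA.
* ase above is hypothetical; substitute the Asymp. Std. Error from CROSSTABS.
COMPUTE lo95 = kappa - 1.96*ase.
COMPUTE hi95 = kappa + 1.96*ase.
FORMATS kappa ase lo95 hi95 (F6.3).
LIST.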


Re: statistical test if raters not independent of each other

Bruce Weaver
In reply to this post by J McClure
The 5 categories (none, passive, active, plan, plan and preparation) appear to be ordinal, so weighted kappa could be computed rather than kappa.  And it will almost certainly show better agreement.

Also, weighted kappa (with quadratic weights) is equivalent to the most common form of intra-class correlation, so you can just compute the ICC (via RELIABILITY) and call it weighted kappa if that's what will work better for your intended audience or readership.  If you need a reference, check out Biostatistics - The Bare Essentials (by Norman & Streiner).  I believe you can find it via Google Books.  IIRC, they discuss this issue in the chapter on repeated measures ANOVA.

Finally, you posted another message asking about confidence intervals. If you compute the ICC via RELIABILITY, it will give you a 95% CI.

HTH.
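
A minimal RELIABILITY sketch along these lines, using the variable
names Jan gives later in the thread. The scale label is arbitrary, and
the MODEL/TYPE keywords shown on the ICC subcommand are only one
possible choice (they depend on how the raters are viewed, which Jan
asks about further down):

RELIABILITY
  /VARIABLES=SIRISK MD_SI
  /SCALE('Participant vs doctor') ALL
  /MODEL=ALPHA
  /ICC=MODEL(MIXED) TYPE(CONSISTENCY) CIN=95 TESTVAL=0.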


Re: statistical test if raters not independent of each other

Ryan
If the categories are ordinal, then the OP might consider computing an intraclass correlation coefficient (ICC) via the MIXED procedure. Fitting a linear mixed model (LMM) allows one to compute an ICC after decomposing the variance from various sources.  I haven't followed this thread closely enough to state unequivocally that an LMM would do the trick for this particular design, but based on what I've read so far, it seems like an option to consider.
 
Ryan
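
One way to set up the kind of model Ryan describes, sketched under some
assumptions: it uses the SIRISK/RN_SI/MD_SI names from Jan's later post
and assumes a participant identifier variable (called id here). The
three ratings are first restructured to one row per participant-rater
pair; a consistency-type ICC can then be read off as the participant
variance divided by participant plus residual variance in the covariance
parameter estimates:

VARSTOCASES
  /MAKE rating FROM SIRISK RN_SI MD_SI
  /INDEX=rater(3).
MIXED rating BY rater
  /FIXED=rater | SSTYPE(3)
  /METHOD=REML
  /RANDOM=INTERCEPT | SUBJECT(id) COVTYPE(VC)
  /PRINT=SOLUTION TESTCOV.
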
Re: statistical test if raters not independent of each other

Bruce Weaver
Hi Ryan.  If there are no missing data, I don't see any great advantage to using MIXED.  If you use RELIABILITY, the ICC and its 95% CI are reported in the output.  I don't think that is so with MIXED, is it?  I believe you have to do your own computations using variance components.

Bruce

Re: statistical test if raters not independent of each other

Ryan
Hi, Bruce:

I'm a fan of computing the ICC via linear mixed modelling since it can
handle various scenarios (e.g., see Shrout and Fleiss, 1979), as
suggested by a regular poster on SAS-L earlier this year.

See link for details:
http://www.listserv.uga.edu/cgi-bin/wa?A2=ind1004B&L=sas-l&P=R20678

So, I see the MIXED procedure in SAS (and SPSS) as being particularly
useful given its flexibility (e.g., raters may be considered a random
sample of all possible raters; raters may be considered the entire
population of raters; there need not be a consistent set of
raters...). This is not to say that RELIABILITY cannot handle
various scenarios as well. Also, I agree that calculating the ICC in
the MIXED procedure in SPSS requires some additional work, especially
if one wants the corresponding 95% confidence limits.

Ryan
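
For the "raters as a random sample of all possible raters" case Ryan
mentions, one sketch (again assuming the long-format data and the id
and rater variables used in the earlier MIXED sketch) is to give both
participants and raters random intercepts. A Shrout-and-Fleiss-style
absolute-agreement ICC for a single rating would then be
Var(id) / (Var(id) + Var(rater) + Var(residual)), computed by hand from
the Estimates of Covariance Parameters table:

MIXED rating
  /FIXED=| SSTYPE(3)
  /METHOD=REML
  /RANDOM=INTERCEPT | SUBJECT(id) COVTYPE(VC)
  /RANDOM=INTERCEPT | SUBJECT(rater) COVTYPE(VC)
  /PRINT=SOLUTION TESTCOV.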

Re: statistical test if raters not independent of each other

Art Kendall
If you have details of the further work, perhaps you could post them and
send them to
[hidden email]

Art Kendall
Social Research Consultants


Re: statistical test if raters not independent of each other

J McClure
In reply to this post by Bruce Weaver
Thanks for the responses! (I come from an epidemiology background, not
psychology, so I have had little experience in this area.)

In trying to understand how to specify the ICC, I realized I'm not very
clear in several areas.
To recap: the study participant rates their suicidality on a survey
(variable SIRISK, with 5 categories); then a nurse interviews the
participant and completes a medical record template for suicide risk
(variable RN_SI, with the same 5 categories). The nurse gives the
doctor a verbal report, and the doctor then interviews the participant
and in their progress note indicates degree of suicidality (variable
MD_SI, with the same 5 categories). Each of the 280 participants is of
course unique, the nurse is any of the 4 or 5 who work in this job, and
the doctor was whichever psychiatric resident or attending happened to
be working that day. Because the nurse gives a report to the doctor, I
realized that I can't look at agreement between the doctor and nurse.
So I am looking at agreement between the doctor and the participant,
and separately at agreement between the nurse and the participant.

Considering just the doctor for the moment:
1. Is the participant (who scores their own suicidality) one of two
'judges' (the other being the doctor), both of whom are rating the same
entity (the participant), or is there only one judge, the doctor, whose
rating is being compared to the "truth"? Or are these the same thing?
2. To specify the ICC:
**Are the variables SIRISK and MD_SI?
**What is the scale name? Something I make up?
**What is the basis for deciding on the model in the MODEL subcommand?
(I am inclined to choose alpha.)
**For the ICC subcommand, it seems that both the participant and the
doctor are random, so if there are two 'judges' I should specify a
RANDOM model, or if there is only one judge, a ONEWAY model; or, if
'item' refers to the survey question I am using for the variable SIRISK,
then item is not random and I should specify MIXED.
**I don't understand what TESTVAL is, so I can't even make a guess.

Many thanks for any help,
Jan
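
As a concrete illustration of the specification being asked about, here
is a minimal sketch of a RELIABILITY call for the participant-versus-
doctor pair, using the variable names SIRISK and MD_SI from the recap
above. The scale label is arbitrary, and the MODEL(RANDOM)/TYPE(ABSOLUTE)
choices are assumptions made for illustration, not a recommendation for
this particular design.

*-----------Start Code------------*.

* Illustrative sketch only: two judges rate each participant
* (SIRISK = participant self-rating, MD_SI = doctor rating);
* two-way random model, absolute agreement, ICC tested against 0.
RELIABILITY
  /VARIABLES=SIRISK MD_SI
  /SCALE('Participant vs MD') ALL
  /MODEL=ALPHA
  /ICC=MODEL(RANDOM) TYPE(ABSOLUTE) CIN=95 TESTVAL=0.

*-------------End Code------------*.

TESTVAL is simply the null value the ICC is tested against (0 by
default), and CIN sets the confidence level, so the single-measures ICC
and its 95% CI appear directly in the output.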

On 11/13/2010 12:02 PM, Bruce Weaver wrote:

> The 5 categories (none, passive, active, plan, plan and preparation) appear
> to be ordinal, so weighted kappa could be computed rather than kappa.  And
> it will almost certainly show better agreement.
>
> Also, weighted kappa (with quadratic weights) is equivalent to the most
> common form of intra-class correlation, so you can just compute the ICC (via
> RELIABILITY) and call it weighted kappa, if that's what will work better for
> your intended audience or readership.  If you need a reference, check out
> Biostatistics: The Bare Essentials (by Norman & Streiner).  I believe you
> can find it via Google Books.  IIRC, they discuss this issue in the chapter
> on repeated measures ANOVA.
>
> Finally, you posted another message asking about confidence intervals. If
> you compute the ICC via RELIABILITY, it will give you a 95% CI.
>
> HTH.
>
>
> --- snip ---

Re: statistical test if raters not independent of each other

Ryan
In reply to this post by Art Kendall
Sure. Let's perform the analysis on the same data that was used in the
SAS-L post to which I referred previously. Further, let's assume that
all ratees are rated by the same raters, and that the raters are
considered to be a random sample of all raters.

*-----------Start Code------------*.

data list list / Ratee Rater Rating.
begin data
1 1 9
1 2 2
1 3 5
1 4 8
2 1 6
2 2 1
2 3 3
2 4 2
3 1 8
3 2 4
3 3 6
3 4 8
4 1 7
4 2 1
4 3 2
4 4 6
5 1 10
5 2 5
5 3 6
5 4 9
6 1 6
6 2 2
6 3 4
6 4 7
end data.
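* The MIXED model below has an intercept-only fixed part; Ratee and Rater
* are both fitted as random factors (variance-components covariance structure).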

MIXED Rating BY Ratee Rater
  /FIXED=| SSTYPE(3)
  /METHOD=REML
  /PRINT=G
  /RANDOM=Ratee Rater | COVTYPE(VC).

*-------------End Code------------*.

Before calculating the ICC using the variance components reported in
the MIXED output, it is worth noting that the ICC can be written as:

            Var(Between Ratee)
           -----------------------------------
                      Var(Total)

where

Var(Between Ratee) = Between Ratee Variance
Var(Total) = Between Ratee Variance + Between Rater Variance + Residual (Error) Variance

Using the formula above and the "Estimates of Covariance Parameters"
table in the MIXED output, we calculate the ICC to be:

                         2.56
 ICC =  -------------------------------  =  2.56 / 8.82  =  0.29
             2.56 + 5.24 + 1.02

HTH,

Ryan
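
For comparison, the same variance decomposition can be cross-checked
with RELIABILITY, which reports the single-measures ICC together with
its 95% CI. A minimal sketch, assuming the data above are still the
active dataset and that CASESTOVARS uses its default variable naming
(Rating.1 to Rating.4):

*-----------Start Code------------*.

* Restructure from long (one row per Ratee x Rater combination) to wide
* (one row per Ratee, one column per Rater).
SORT CASES BY Ratee Rater.
CASESTOVARS
  /ID=Ratee
  /INDEX=Rater.

* Single-measures ICC: two-way random model, absolute agreement, 95% CI.
RELIABILITY
  /VARIABLES=Rating.1 Rating.2 Rating.3 Rating.4
  /SCALE('Raters') ALL
  /MODEL=ALPHA
  /ICC=MODEL(RANDOM) TYPE(ABSOLUTE) CIN=95 TESTVAL=0.

*-------------End Code------------*.

With balanced data such as these, the single-measures ICC reported by
RELIABILITY should land very close to the 0.29 obtained by hand from
the variance components.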

On Sun, Nov 14, 2010 at 10:17 AM, Art Kendall <[hidden email]> wrote:

> If you have details of the further work perhaps you could post them and send
> them to
> [hidden email]
>
> Art Kendall
> Social Research Consultants
>
> On 11/14/2010 10:10 AM, R B wrote:
>> --- snip ---

Re: statistical test if raters not independent of each other

Bruce Weaver
Administrator
Getting the 95% CI will not be quite so straightforward, though.  So *when* RELIABILITY can give me the answer, I'll use it rather than MIXED--at least if I also want the confidence interval.


R B wrote
Sure. Let's perform the analysis on the same data that was used in the
SAS-L post to which I referred previously. Further, let's assume that
all ratees are rated by the same raters, and that the raters are
considered to be a random sample of all raters.

--- snip ---


            Var(Between Ratee)
           -----------------------------------
                      Var(Total)

where

Var(Between Ratee) = Between Ratee Variance
Var(Total) = Between Ratee Variance + Between Rater Variance + Residual (Error) Variance

Using the formula above and the "Estimates of Covariance Parameters"
table in the MIXED output, we calculate the ICC to be:

                         2.56
 ICC =  -------------------------------  =  2.56 / 8.82  =  0.29
             2.56 + 5.24 + 1.02

HTH,

Ryan

On Sun, Nov 14, 2010 at 10:17 AM, Art Kendall <Art@drkendall.org> wrote:
> --- snip ---
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).