icc in spss


icc in spss

Anter
Hello all,

From what I understand, interrater reliability is an index of consistency and refers to two or more raters ranking one or more items similarly.  I have a scale of 28 items and 100 people rating these items.  The situation can be described as: 100 raters rating 1 person (the organization) across 28 items (that make up the scale). These 28 items are characteristics of organizational climate. I want to compute an organizational aggregate from the values and subtract from it the individual characteristics measured with the same scale (only referring to the individual). Can I assess interrater reliability using ICC(1), which is offered as an option in the reliability analysis in SPSS?  I need to know if reliability exists among organizational members for the climate characteristics. Please correct me if I misused the ICC in this case.
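For reference, the option I mean would be requested with syntax along these lines (item1 TO item28 are placeholder names for my items; in my file each row is one rater and each column is one item):

* Cronbach's alpha plus the one-way ICC, requested together.
RELIABILITY
  /VARIABLES=item1 TO item28
  /SCALE('Climate') ALL
  /MODEL=ALPHA
  /ICC=MODEL(ONEWAY) CIN=95 TESTVAL=0.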

Thank you,


Andra Toader
E-mail: [hidden email]


Re: icc in spss

Rich Ulrich
Sorry, yes, you would be misusing ICC in this case.
There is no way of saying anything about the "consistency of
raters across cases" when you have exactly 1 case.

Instead, each rater has given their ratings for something like
"perceived climate."  Are those organized into factors, to make
up a few subscales?  That is something you should try to do,
because (a) you really don't have 28 separate hypotheses of equal
importance, and (b) individual items have relatively poor reliability.
You can use the RELIABILITY procedure to get Cronbach's alpha, to
measure the internal consistency of the scales you derive.
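For a derived subscale, the syntax would be something like this (the four "open" items are hypothetical):

RELIABILITY
  /VARIABLES=open1 open2 open3 open4
  /SCALE('Openness') ALL
  /MODEL=ALPHA
  /SUMMARY=TOTAL.

The /SUMMARY=TOTAL line adds the item-total statistics, which help in deciding whether an item belongs in the scale.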

You have some personal characteristics of those 100 that you
intend to relate to one or more of the subscales.  How many
characteristics?  For what purposes?  You can look at correlations
between pairs of variables, and you probably have some place
where multiple regression could be useful.
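In syntax, that would be something like the following, where the variable names are only placeholders:

CORRELATIONS
  /VARIABLES=openness_scale age tenure
  /PRINT=TWOTAIL NOSIG.

REGRESSION
  /STATISTICS COEFF R ANOVA
  /DEPENDENT openness_scale
  /METHOD=ENTER age tenure.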

But there is no application that I see for ICC.

--
Rich Ulrich



... snip, previous.


Re: icc in spss

Anter
Yes, the items are supposed to make 7 scales, but these scales haven't emerged in this sample.  I have determined Cronbach's α, but I need the interrater reliability to aggregate the organizational characteristics. I will use the aggregate to compare it with individual measures of the same characteristics. For example, someone rates how much openness exists in his organization, and how much openness he would like to exist. Say openness is an item in my scale, and all the people rate it according to how much openness they perceive to exist in their organization. If their ratings are consistent, then I can compute an aggregate and use it to compare my individual desire for openness with what my organization is offering now. I can correlate my profile with my organization's, or subtract my score from the organization's score. This is basically what I want to do, only that there are 28 characteristics like the one exemplified, and I need to know if the ratings are reliable. So, taking the items as a scale and determining interrater reliability across all items, yielding an ICC, is incorrect in this case?
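To make it concrete, this is the kind of computation I had in mind (the variable names are invented for the example):

* The organizational aggregate = the grand mean of the actual ratings.
COMPUTE const = 1.
AGGREGATE OUTFILE=* MODE=ADDVARIABLES
  /BREAK=const
  /org_openness=MEAN(openness_actual).
* My fit = what I desire minus what the organization offers.
COMPUTE fit_openness = openness_desired - org_openness.
EXECUTE.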


Andra Toader
E-mail: [hidden email]



... snip, previous.


Re: icc in spss

Rich Ulrich
Why are you doing this analysis?  Has someone tried to tell you
what it should be?  You do not have the two-way design needed for
an ICC.  In a two-way design of Raters by Companies, where each
rater rates each company, you might essentially use the large difference
between companies to demonstrate that the raters are seeing the same
thing.  That is the ICC.  If the companies are all similar, you will not
see a high ICC.
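If you did have that design, with one row per company and one column per rater, the ICC syntax would be along these lines (here, a hypothetical 10 raters):

* One row per company; rater1 TO rater10 hold each rater's score.
RELIABILITY
  /VARIABLES=rater1 TO rater10
  /ICC=MODEL(RANDOM) TYPE(ABSOLUTE) CIN=95.

MODEL(RANDOM) treats the raters as a random sample, and TYPE(ABSOLUTE) asks about absolute agreement rather than mere consistency.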

After this post, I am not sure whether you have one company or
several companies, but I think each person is rating only one company.
This would allow you to make a statement about "discriminative validity"
if there are several companies.

I would assume that the rating scale was designed to reflect latent factors
among companies, not among raters.  Especially if there is only one
company, what a factor analysis could show you would *only* be the
psychological factors for the raters.  If there are only a few companies, there
would still be a large influence from Raters rather than Companies.

So, I would say that there is an excuse to use the presumed scales, even
if they do not yield a high Cronbach's alpha.

I don't understand what you are after, using the "desire" scoring.  Are
you looking for "Dissatisfaction"?

--
Rich Ulrich


Date: Tue, 9 Oct 2012 14:42:48 -0700
From: [hidden email]
Subject: Re: icc in spss
To: [hidden email]

Yes, the items are supposed to make 7 scales, but these scales haven't emerged for this sample application.  I have determined α Cronbach's, but I need the interrater reliability to aggregate organizational characteristics. I will use the aggregate to compare it with individual measures of the same characteristics. For example, someone rates how much openness exists in his organization, and how much openness he would like to exist. Say openness is an item in my scale. And all the people rate it according to how much openness they perceive to exist in their organization. If their ratings are consistent then I can compute an aggregate and use it to compare my individual desire for openness to what my organization if offering now. I can correlate my profile with my organization's or subtract my score from the organization score. This is basically what I want to do. Only that there are 28 characteristics as the one exemplified. And I need to know if their ratings are reliable. So, just taking the items as a scale and determining interrater reliability across all items yielding an ICC is incorrect in this case?





Re: icc in spss

Anter

Thank you for the answer.

> Why are you doing this analysis?  Has someone tried to tell you
> what it should be?  You do not have the two-way design needed for
> an ICC.  In a two-way design of Raters by Companies, where each
> rater rates each company, you might essentially use the large difference
> between companies to demonstrate that the raters are seeing the same
> thing.  That is the ICC.  If the companies are all similar, you will not
> see a high ICC.

I thought I could use it even when many raters rate just one subject.

> After this post, I am not sure whether you have one company or
> several companies, but I think each person is rating only one company.
> This would allow you to make a statement about "discriminative validity"
> if there are several companies.

I have one company.

> I would assume that the rating scale was designed to reflect latent factors
> among companies, not among raters.  Especially if there is only one
> company, what a factor analysis could show you would *only* be the
> psychological factors for the raters.  If there are only a few companies, there
> would still be a large influence from Raters rather than Companies.
>
> So, I would say that there is an excuse to use the presumed scales, even
> if they do not yield a high Cronbach's alpha.

The overall reliability of the scale composed of the 28 items is .90. I haven't checked the reliability of the theoretical scales or factors from the initial validation study, because different studies found different numbers of factors. Instead, I tried to determine the number of factors that resulted from my application and compare them with the original factors to assess the scale's validity.
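The factor check was done with syntax along these lines (item names again placeholders):

FACTOR
  /VARIABLES=item1 TO item28
  /MISSING=LISTWISE
  /PRINT=INITIAL EXTRACTION ROTATION
  /CRITERIA=MINEIGEN(1)
  /EXTRACTION=PAF
  /ROTATION=PROMAX(4).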
 
> I don't understand what you are after, using the "desire" scoring.  Are
> you looking for "Dissatisfaction"?

Basically, it is a commensurate measurement of ideal vs. real climate. I am assessing their "fit" with the real climate, not their satisfaction. I am also trying to determine an aggregate of the climate characteristics to compare with the individuals' characteristics. I checked for agreement on the items, but I also wanted to know how reliable the ratings are. Is there no index that I can use for such a case?


... snip, previous.


Re: icc in spss

Rich Ulrich
There are many different kinds of "reliability" -- inter-rater
(including ICC), pre-post, construct and differential are
several of the kinds that you do not have the data for.

What you have, with everyone rating the same company,
is pretty much limited to Cronbach's alpha, the reliability
estimated by internal consistency for a scale.

What can you do with these data, including the "desirable"
set? 

I would compute the average item scores for some factors,
both to minimize chance outcomes by running a smaller
number of main analyses, and to increase the reliability
of what is being analyzed.
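For example, with a hypothetical scale composition:

* MEAN.3 returns a value only when at least 3 of the 4 items are valid.
COMPUTE open_act = MEAN.3(open1, open2, open3, open4).
COMPUTE open_des = MEAN.3(dopen1, dopen2, dopen3, dopen4).
EXECUTE.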

I suggest plotting the Desirable vs. Actual means.  Discuss
the highest, lowest, and most divergent.  Use paired t-tests.
You could show the items in the scatterplot, too, but I would
discuss them less.  Show the correlations among the scales.
If it is interesting to note that the average item scores on two
scales are different, you could do paired t-tests on those, too.
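In syntax, using the hypothetical open_act and open_des scores computed above:

* Paired t-test of Actual vs. Desirable for one scale.
T-TEST PAIRS=open_act WITH open_des (PAIRED)
  /CRITERIA=CI(.95).
* Scatterplot of the two sets of scores.
GRAPH
  /SCATTERPLOT(BIVAR)=open_act WITH open_des.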

After that, start worrying about the other personal scores.

I don't know how well self-study works for learning statistics
without someone serving as an occasional tutor or mentor,
but what you don't know about reliability suggests that you are
in that position.  UCLA has a site, among others, that gives
tutorials on *doing* statistics with SPSS.  Some posters to this list
have sites that they advertise.

--
Rich Ulrich

... snip, previous.