Sorry, yes, you would be misusing ICC in this case.
There is no way to say anything about the "consistency of raters across cases" when you have exactly one case. Instead, each rater has given ratings of something like "perceived climate." Are those items organized into factors, making up a few subscales? That is something you should try to do, because (a) you really don't have 28 separate hypotheses of equal importance, and (b) individual items have relatively poor reliability. You can use the Reliability procedure to get Cronbach's alpha, which measures the internal consistency of the scales you derive.

You have some personal characteristics of those 100 raters that you intend to relate to one or more of the subscales. How many characteristics? For what purposes? You can look at correlations between pairs of variables, and there is probably some place where multiple regression would be useful. But I see no application for the ICC.

-- Rich Ulrich

Date: Tue, 9 Oct 2012 10:06:52 -0700
From: [hidden email]
Subject: icc in spss
To: [hidden email]
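A minimal sketch of that Reliability run for Cronbach's alpha, assuming a hypothetical subscale built from items q1 to q7 (the item names and the scale label are placeholders, not from the original post):

  RELIABILITY
    /VARIABLES=q1 q2 q3 q4 q5 q6 q7
    /SCALE('Perceived climate subscale') ALL
    /MODEL=ALPHA
    /SUMMARY=TOTAL.

The SUMMARY=TOTAL subcommand adds item-total correlations and alpha-if-item-deleted, which help in deciding whether an item belongs on the subscale.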
Why are you doing this analysis? Has someone tried to tell you what it should be? You do not have the two-way design needed for an ICC. In a two-way design of Raters by Companies, where each rater rates each company, you essentially use the large differences between companies to demonstrate that the raters are seeing the same thing. That is the ICC. If the companies are all similar, you will not see a high ICC.

After this post, I am not sure whether you have one company or several, but I think each person is rating only one company. If there are several companies, that would allow you to make a statement about "discriminative validity."

I would assume that the rating scale was designed to reflect latent factors among companies, not among raters. Especially if there is only one company, what a factor analysis could show you would be *only* the psychological factors for the raters. If there are only a few companies, there would still be a large influence of Raters rather than Companies. So I would say there is an excuse to use the presumed scales, even if they do not yield a high Cronbach's alpha.

I don't understand what you are after with the "desire" scoring. Are you looking for "Dissatisfaction"?

-- Rich Ulrich

Date: Tue, 9 Oct 2012 14:42:48 -0700
From: [hidden email]
Subject: Re: icc in spss
To: [hidden email]
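For reference, if the two-way Raters-by-Companies design described above did exist, the ICC would come from the same Reliability procedure. A minimal sketch, assuming hypothetical data laid out with one row per company and one column per rater (rater1 to rater5 are placeholder names):

  * Rows are the rated targets (companies); columns are the raters.
  RELIABILITY
    /VARIABLES=rater1 rater2 rater3 rater4 rater5
    /SCALE('Raters') ALL
    /ICC=MODEL(RANDOM) TYPE(CONSISTENCY) CIN=95.

MODEL(RANDOM) requests the two-way random-effects ICC; TYPE(ABSOLUTE) would additionally penalize raters for disagreeing on level, not just on ordering.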
There are many different kinds of "reliability" -- inter-rater (including the ICC), pre-post, construct, and differential are several of the kinds that you do not have the data for. What you have, with everyone rating the same company, is pretty much limited to Cronbach's alpha, reliability estimated from the internal consistency of a scale.

What can you do with these data, including the "desirable" set? I would compute average item scores for some factors, both to minimize chance outcomes by running a smaller number of main analyses and to increase the reliability of what is being analyzed. I suggest plotting the Desirable vs. Actual means; discuss the highest, the lowest, and the most divergent. Use paired t-tests. You could show the items in the scatterplot, too, but I would discuss them less. Show the correlations among the scales. If it is interesting to note that the average item scores on two scales differ, you could do paired t-tests on those, too. After that, start worrying about the other personal scores. (A sketch of the SPSS syntax for these steps follows below.)

I don't know how well self-study works for learning statistics without someone serving as an occasional tutor or mentor, but what you don't know about reliability suggests that you are in that position. UCLA has a site, among others, that gives tutorials on *doing* statistics with SPSS. Some posters to this list have sites that they advertise.

-- Rich Ulrich

... snip, previous.
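A minimal sketch of those steps in SPSS syntax, assuming two hypothetical subscales with Actual items a1 to a4 and b1 to b4 and matching Desirable items da1 to da4 and db1 to db4 (all variable names are placeholders, not from the original post):

  * Average item scores for each subscale, Actual and Desirable.
  COMPUTE act_sub1 = MEAN(a1, a2, a3, a4).
  COMPUTE act_sub2 = MEAN(b1, b2, b3, b4).
  COMPUTE des_sub1 = MEAN(da1, da2, da3, da4).
  COMPUTE des_sub2 = MEAN(db1, db2, db3, db4).
  EXECUTE.

  * Paired t-tests of Desirable vs. Actual for each subscale.
  T-TEST PAIRS=des_sub1 des_sub2 WITH act_sub1 act_sub2 (PAIRED)
    /CRITERIA=CI(.95).

  * Correlations among the subscale scores.
  CORRELATIONS
    /VARIABLES=act_sub1 act_sub2 des_sub1 des_sub2
    /PRINT=TWOTAIL NOSIG.

  * Case-level scatter of Desirable against Actual for one subscale.
  * The item-level plot of means described above would first need the
  * item means assembled into their own small dataset.
  GRAPH
    /SCATTERPLOT(BIVAR)=act_sub1 WITH des_sub1.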