Re: Advice - Follow-up

Posted by Ryan
URL: http://spssx-discussion.165.s1.nabble.com/Advice-tp5430896p5433048.html

I'm going to go out on a limb here, as I'm still not terribly clear about the design. Assuming I understand, here are some preliminary thoughts...

First and foremost, in order to employ the MIXED procedure, the dataset needs to be structured in vertical (long) format, one row per rating, as follows:

ID  Group Scenario  Rating
---------------------------
1     1       1     missing**
1     1       2      score**
.     .       .        .
.     .       .        . 
1     1      10      score
2     1       1      score
2     1       2      score
.     .       .        .
.     .       .        . 
2     1      10      score
25    1       1     missing
25    1       2      score
.     .       .        .
.     .       .        .   
25    1      10      score
1     2       1      score
1     2       2      score
.     .       .        . 
.     .       .        .
1     2      10      score  
2     2       1      score
2     2       2      score
.     .       .        .
.     .       .        . 
2     2      10      score
6     2       1     missing
6     2       2      score
.     .       .        .
.     .       .        .
6     2      10      score
---------------------------

where

ID = Subject identification variable; it restarts at 1 within each Group, so a subject is uniquely identified only by the Group*ID combination
Group = Grouping indicator variable (1=Student, 2=Faculty member)
Scenario = Scenario indicator variable (1 through 10 Scenarios)
Rating = Scenario Ratings
**missing = missing response data
**score = valid response data
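
If the data are currently in the usual wide layout (one row per rater, one column per scenario), VARSTOCASES will stack them. A minimal sketch, untested; the rating variables r1 through r10 are placeholders for whatever the ten columns are actually called, and the TO shorthand assumes they sit contiguously in the file:

* Restructure wide ratings (hypothetical names r1 to r10) into long format.
varstocases
  /make Rating from r1 to r10
  /index=Scenario(10)
  /null=keep
  /keep=ID Group.

The /null=keep retains rows where the rating is missing, matching the layout above; MIXED will simply exclude those rows during estimation.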

Although I have never tried the code below, I think it should test for group differences in mean ratings while estimating group-specific variance components. In other words, the model allows for within-subject correlation and permits that correlation to differ across groups. Hope that makes sense.

* Because ID restarts at 1 within each Group, the subject must be
* identified by the Group*ID combination, not by ID alone.
mixed Rating by Group
  /fixed=Group
  /method=reml
  /print=solution testcov
  /random=Group | subject(Group*ID) covtype(diag).

It should be noted that the estimated variance components will be biased given the small sample sizes, especially the variance component for the faculty group. As I think about this study, I question whether my proposed code is the optimal approach. Anyway, no time to think about it further right now. 
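
For comparison, and strictly as another untested sketch, a more conventional specification would treat Scenario as a fixed repeated factor with a single random intercept per subject; the Group*Scenario term then directly tests whether the group difference is consistent across the ten scenarios, which appears to be Jennifer's hypothesis below:

mixed Rating by Group Scenario
  /fixed=Group Scenario Group*Scenario
  /method=reml
  /print=solution testcov
  /random=intercept | subject(Group*ID).

This version gives up the group-specific variance components, which may be a reasonable trade given how few faculty raters there are.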

If someone thinks I've misunderstood a fundamental issue, please write back and I will try to adjust the MIXED code accordingly to help the OP.

Ryan

On Wed, Jan 25, 2012 at 2:49 PM, Rich Ulrich <[hidden email]> wrote:
To be more clear than I was -- I was referring to a design
that is basically repeated measures,  Scenes (10) by Group (2);
and it needs to be analyzed with IDs specified, since IDs are not
balanced.  (Presumably, each ID had several ratings.)

If your table does not reveal some coding errors, there are at
least 6 different faculty members, though there are usually only
5 ratings.  You did not confirm that the same 25 students did all
25 Resident ratings.  If there were a lot more raters than that,
the analysis could have difficulties from sparseness. 

I think you would set this up using MIXED, specifying that IDs
are collected within Group; but I don't have the syntax. 

It is certainly "legitimate" to do an analysis with small N, even
when that analysis lacks power.  But 50 ratings across 5 or 6
raters is not especially "few", in the general universe of studies.

--
Rich Ulrich


Date: Wed, 25 Jan 2012 14:25:14 -0500

From: [hidden email]
Subject: Re: Advice - Follow-up
To: [hidden email]

Yes -- correct -- our hypothesis is that the faculty will consistently be significantly different (across all scenarios) from learners...
Can I legitimately do an "unbalanced" ANOVA with so few faculty? Thanks --jennifer
[snip, previous]