http://spssx-discussion.165.s1.nabble.com/Advice-tp5430896p5433048.html
I'm going to go out on a limb here, as I'm still not terribly clear about the design. Assuming I understand, here are some preliminary thoughts...
First and foremost, in order to employ the MIXED procedure, the dataset needs to be structured in vertical (long) format as follows:
ID Group Scenario Rating
---------------------------
1 1 1 missing**
1 1 2 score**
. . . .
. . . .
1 1 10 score
2 1 1 score
2 1 2 score
. . . .
. . . .
2 1 10 score
25 1 1 missing
25 1 2 score
. . . .
. . . .
25 1 10 score
1 2 1 score
1 2 2 score
. . . .
. . . .
1 2 10 score
2 2 1 score
2 2 2 score
. . . .
. . . .
2 2 10 score
6 2 1 missing
6 2 2 score
. . . .
. . . .
6 2 10 score
---------------------------
where
ID = Subject identification variable which starts at 1 for each Group
Group = Grouping indicator variable (1=Student, 2=Faculty member)
Scenario = Scenario indicator variable (1 through 10 Scenarios)
Rating = Scenario Ratings
**missing = missing response data
**score = valid response data
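If the data are currently in wide format (one row per subject, with the ten scenario ratings in separate columns), SPSS's VARSTOCASES command will do the restructure. Purely as illustration, here is a minimal sketch of the same idea in Python; the column names s1, s2, s3 and the sample values are my own assumptions, not from the OP's data:

```python
# Hypothetical wide-format rows: one row per subject, ratings in s1..s3
# (a real dataset would have s1..s10). None marks a missing rating.
wide = [
    {"ID": 1, "Group": 1, "s1": None, "s2": 4, "s3": 5},
    {"ID": 2, "Group": 1, "s1": 3, "s2": 2, "s3": 4},
]

# Build one record per subject-by-Scenario combination (the vertical layout).
long_rows = []
for row in wide:
    for scenario in (1, 2, 3):  # would run 1..10 with all ten columns
        long_rows.append({
            "ID": row["ID"],
            "Group": row["Group"],
            "Scenario": scenario,
            "Rating": row[f"s{scenario}"],
        })

# Each subject now contributes one row per Scenario, with Rating left
# as None (missing) where no response was given.
```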
Although I have never tried the code below, I think it should test for group differences in mean ratings while estimating group-specific variance components. In other words, the model allows for within-subject correlation and permits that correlation to differ across groups. Hope that makes sense.
mixed Rating by Group
/fixed=Group
/method=reml
/print=solution
/random=Group | subject(ID) covtype(diag).
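For what it's worth, my reading of that syntax as a model is roughly (the notation is mine):

  y_ij = b0 + b1*Group_i + u_i + e_ij,
  u_i ~ N(0, sigma^2_u[g(i)]),  e_ij ~ N(0, sigma^2_e)

where g(i) is subject i's group, so COVTYPE(DIAG) lets the random-intercept variance be estimated separately for students and faculty, which is what induces the group-specific within-subject correlation.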
It should be noted that the estimated variance components will be biased given the small sample sizes, especially the variance component for the faculty group. As I think about this study, I question whether my proposed code is the optimal approach. Anyway, no time to think about it further right now.
If someone thinks I've misunderstood a fundamental issue, please write back and I will try to adjust the MIXED code accordingly to help the OP.
Ryan