http://spssx-discussion.165.s1.nabble.com/Simple-Main-Effects-Pairwise-Comparisons-vs-Univariate-Tests-tp5726323p5726354.html
You want to know if Experimental differs from Control at post1 and post2,
controlling for Pre. The design you ran buries the test of interest in an
interaction term, which is the sort of complication that should be avoided
whenever possible. You get a weak test, since it measures something ELSE
plus what you want.
Test just the Post periods as Repeats, using Pre as covariate. That tests
what you want, and leaves you with pretty simple results to describe. The error term
for the subsequent tests is reduced according to the (high) correlation of Pre with the Post scores.
Or, is there a reason you don't like that?
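Concretely, using the variable names from your syntax below, that analysis would look something like this (an off-the-cuff sketch, not tested):

GLM FBTotPost1 FBTotPost2 BY Group WITH FBTotPre
/WSFACTOR=Time 2 Polynomial
/METHOD=SSTYPE(3)
/EMMEANS=TABLES(Group) WITH(FBTotPre=MEAN) COMPARE ADJ(LSD)
/PRINT=DESCRIPTIVE ETASQ
/WSDESIGN=Time
/DESIGN=FBTotPre Group.

Note that GLM will not run POSTHOC requests when a covariate is in the model, so the Group comparisons have to come from EMMEANS, evaluated at the mean of Pre.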
For this design, the interaction is measuring the difference between the Post
periods, which is probably less interesting than other Period effects.
Also, consider the purpose or character of the TWO controls. If they are
"very similar", you might justify combining the two controls, and then you
have a two-group result to describe, which is even easier than three.
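If you do combine them, a simple recode sets that up; I'm assuming, hypothetically, that Group is coded 1 = Experimental, 2 and 3 = the two controls:

RECODE Group (1=1) (2,3=2) INTO Group2.
VALUE LABELS Group2 1 'Experimental' 2 'Combined Control'.
EXECUTE.

Then substitute Group2 for Group in the GLM.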
--
Rich Ulrich
Date: Wed, 4 Jun 2014 12:29:54 -0700
From: [hidden email]
Subject: Re: Simple Main Effects Pairwise Comparisons vs Univariate Tests
To: [hidden email]

Dear Melissa and Rich,
Thank you for your feedback. The syntax I used was:
GLM FBTotPre FBTotPost1 FBTotPost2 BY Group
/WSFACTOR=Time 3 Polynomial
/MEASURE=FBtot
/METHOD=SSTYPE(3)
/POSTHOC=Group(LSD BONFERRONI)
/PLOT=PROFILE(Time*Group)
/EMMEANS=TABLES(OVERALL)
/EMMEANS=TABLES(Group) COMPARE ADJ(LSD)
/EMMEANS=TABLES(Time) COMPARE ADJ(LSD)
/EMMEANS=TABLES(Group*Time) COMPARE (GROUP) ADJ(LSD)
/PRINT=DESCRIPTIVE ETASQ OPOWER HOMOGENEITY
/CRITERIA=ALPHA(.05)
/WSDESIGN=Time
/DESIGN=Group.
To better explain the design: this was an intervention with three time points (1 pre-test and 2 post-tests). There was one experimental group and two control groups. There was a significant main effect of time (kids in all groups increased in the skill assessed over time, which is expected), but that is not the central research question. I am interested in the TimeXGroup interaction, and plotted means show that the experimental group appeared to gain more dramatically in the skill assessed compared to the two control groups.

The pairwise comparisons following the significant interaction showed that the experimental group did better than control group 1 at Time 2 (post-test 1; p = .03) and at Time 3 (post-test 2; p = .047). However, the univariate tests are not significant at Time 2 (p = .067) or Time 3 (p = .14). Perhaps this is an issue of power, as Rich suggests?

My question is how to report these results. What exactly is reported for the pairwise comparisons, since there is no test statistic in the output? And what do you do when there are significant pairwise comparisons but not a significant univariate test?
Thanks!
Virginia