Re: Simple Main Effects Pairwise Comparisons vs Univariate Tests

Posted by statshelp on Jun 04, 2014; 7:29pm
URL: http://spssx-discussion.165.s1.nabble.com/Simple-Main-Effects-Pairwise-Comparisons-vs-Univariate-Tests-tp5726323p5726336.html

Dear Melissa and Rich,

Thank you for your feedback. The syntax I used was:

GLM FBTotPre FBTotPost1 FBTotPost2 BY Group
   /WSFACTOR=Time 3 Polynomial
   /MEASURE=FBtot
   /METHOD=SSTYPE(3)
   /POSTHOC=Group(LSD BONFERRONI)
   /PLOT=PROFILE(Time*Group)
   /EMMEANS=TABLES(OVERALL)
   /EMMEANS=TABLES(Group) COMPARE ADJ(LSD)
   /EMMEANS=TABLES(Time) COMPARE ADJ(LSD)
   /EMMEANS=TABLES(Group*Time) COMPARE(Group) ADJ(LSD)
   /PRINT=DESCRIPTIVE ETASQ OPOWER HOMOGENEITY
   /CRITERIA=ALPHA(.05)
   /WSDESIGN=Time
   /DESIGN=Group.

To better explain the design: this was an intervention with three time points (one pre-test and two post-tests), with one experimental group and two control groups. There was a significant main effect of Time (children in all groups improved over time in the skill assessed, as expected), but that is not the central research question. I am interested in the Time x Group interaction, and the plotted means show that the experimental group appeared to gain more dramatically in the skill assessed than the two control groups. The pairwise comparisons following the significant interaction showed that the experimental group did better than control group 1 at Time 2 (post-test 1; p = .03) and at Time 3 (post-test 2; p = .047). However, the univariate tests are not significant at Time 2 (p = .067) or Time 3 (p = .14). Perhaps this is an issue of power, as Rich suggests?

My question is how to report these results. What exactly is reported for the pairwise comparisons, since there is no test statistic in the output? And what do you do when the pairwise comparisons are significant but the univariate test is not?
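From the output, the Pairwise Comparisons table gives only the mean difference, its standard error, and the adjusted significance. If I understand the output correctly, a t statistic could be recovered as

   t = (mean difference) / (standard error),

with degrees of freedom equal to the error d.f. shown in the corresponding Univariate Tests table, but is that the right thing to report?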

Thanks!
Virginia


On Wed, Jun 4, 2014 at 12:23 PM, Rich Ulrich [via SPSSX Discussion] <[hidden email]> wrote:
More detail would help for seeing exactly what you have done, but the questions
at the end are not too tough.

In reverse order: 
(Explaining...) Plot your means.  Is there a simple description?  If not, you might
economically "explain these findings" as Type II error.

(Univariate...) Univariate tests on 3 groups do not have to show 2-group effects
as "significant" when the differences for those 2 groups, tested alone, were borderline.
The extra power from having fewer d.f. under test is one reason why designs
with two groups are inherently superior to designs with three groups.
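To illustrate with hypothetical numbers (n = 20 per group, equal variances assumed): a
two-group difference that gives t(38) = 2.1, p ~ .04, is equivalent to F(1, 38) = 4.4.
Add a third group sitting exactly midway between the other two, and the same
between-group sum of squares is divided over 2 numerator d.f., giving roughly
F(2, 57) = 2.2, p ~ .12, which is no longer "significant."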

(Interaction) The interaction is tested using within-subject variation, which your
follow-up tests do not use.  Trends across time are easier to test and report when you
can use the linear trend component.  If you do not expect "linear" as the nature
of your change, perhaps the design is wrong:  you could use Baseline as a covariate
(see the sketch below), or else primarily test Baseline versus the other time points.
Or, if you do not expect change across time, is there some other reason to be
interested in random effects seen when doing a lot of tests?
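If it helps, here is a minimal sketch of the baseline-as-covariate version in GLM
syntax (variable and factor names are placeholders, not tested; substitute your own):

* Baseline (Pre) as covariate; Time now spans only the 2 post-tests.
GLM Post1 Post2 BY Group WITH Pre
   /WSFACTOR=Time 2 Polynomial
   /METHOD=SSTYPE(3)
   /EMMEANS=TABLES(Group*Time) COMPARE(Group) ADJ(LSD)
   /WSDESIGN=Time
   /DESIGN=Pre Group.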

--
Rich Ulrich

> Date: Tue, 3 Jun 2014 16:29:27 -0700
> From: [hidden email]
> Subject: Simple Main Effects Pairwise Comparisons vs Univariate Tests
> To: [hidden email]
>
> I have an experimental design with Time as the within-subjects factor (3
> levels) and Group as the between-subjects factor (3 levels). There is a
> significant interaction. When examining the simple main effects, I find a
> significant difference between two of the groups at Time 2 and Time 3. My
> question is why the Univariate Tests do not show a significant effect at
> Time 2 or Time 3. What do I report and how do I explain these findings?
> Thanks!