
Re: Simple Main Effects Pairwise Comparisons vs Univariate Tests

Posted by Rich Ulrich on Jun 07, 2014; 5:36am
URL: http://spssx-discussion.165.s1.nabble.com/Simple-Main-Effects-Pairwise-Comparisons-vs-Univariate-Tests-tp5726323p5726388.html

I very often tell people to use the tests that they see in their literature.
But if the literature is screwed up -- like sticking the baseline into the
repeated measures when you won't be looking at linear trend -- I would point
out that the literature often does this wrong, so that researchers end up
trying to untangle interaction effects instead of looking at simpler main
effects.  Test what you are interested in, and report it.  You might not feel
comfortable ignoring that literature (or just ignoring the issue) unless you
can convince yourself that the more awkward approach is truly inferior.

I don't really follow which tests are being cited in the final paragraph.
"... with different p values" -- if the two periods do not differ at all in
their means, then the mean across both periods is what deserves reporting, not
the individual means and contrasts.  However, if you were saying that the
p-values differ from what the previous repeated-measures analysis gave you,
well, that is expected.  The covariate has reduced the size of the error term
for the between-groups tests.
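
(As a rough guide: with a pooled within-group correlation r between the
covariate and the outcome, the adjusted error variance is approximately the
unadjusted error variance times (1 - r^2), apart from a small
degrees-of-freedom correction.  An r of .6 cuts the error term by roughly a
third, which by itself will move the p-values.)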

I don't know how you want to handle the ANOVA situation with 3 groups, with respect
to testing Experimental versus the others.  Rerun the analyses with two groups?  I would
want to consider the a-priori status of the two controls: what was expected for each.
Is the overall test for 3 groups necessary at all, or could it have been omitted?  Or
does it justify the further testing?
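
A custom contrast is one way to test Experimental against the pooled controls
without rerunning on two groups.  A minimal sketch in SPSS syntax -- the
variable names (post1, pretest, group) and the coding (group 1 = Experimental,
2 and 3 = controls) are placeholders, not taken from this thread:

* Hypothetical sketch: Experimental vs. the average of the two controls.
* The coefficients on group sum to zero, so the contrast is estimable.
UNIANOVA post1 BY group WITH pretest
  /METHOD=SSTYPE(3)
  /LMATRIX = "Exp vs pooled controls" group 1 -0.5 -0.5
  /DESIGN = pretest group.

The /LMATRIX subcommand prints its own contrast estimate and test, so that
single comparison does not hinge on the overall 3-group F.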

--
Rich Ulrich



Date: Fri, 6 Jun 2014 12:39:36 -0700
From: [hidden email]
Subject: Re: Simple Main Effects Pairwise Comparisons vs Univariate Tests
To: [hidden email]

Thank you all for your helpful feedback. I think the discussion has moved a bit away from what my original question was, but to answer some of your questions:

Yes, the design was a true experiment--participants were randomly assigned and do not differ significantly at pre-test.
Yes, I graphed the 9 means and the trend is as hypothesized--groups do not differ at pre-test and the experimental group appears to have a more dramatic increase at post-tests 1 and 2.
The control groups do not appear to be entirely parallel--one was a treated control group, so it was expected that there might be some gain over time in that group relative to the control group that did not receive any interaction during the intervention phase.
Yes, I did a power analysis--unfortunately we were slightly below our target sample size.
I'm not opposed to the pre-test as a covariate. I only wonder how the results would be received. In virtually every publication on this topic I see ANOVAs presented with the interaction results followed by the pairwise comparisons.

My original question still remains, though, regarding how to report the results. The pairwise comparisons provide p values, and the univariate tests presented right after them in the SPSS output provide F statistics with p values. I did run an ANCOVA with pre-test as a covariate, and now both the pairwise comparisons between groups and the univariate tests for group are significant at both post-tests, but with different p values. In other publications using this type of three-group design, I see only F statistics reported when discussing group differences. Is that F from the univariate test, even though it does not refer to the contrasts between only two groups?
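
For context, a sketch of the kind of GLM syntax that produces both of those
tables in one run; the variable names (post1, post2, pretest, group) are
placeholders, not the actual variables from this study:

* Hypothetical sketch: repeated-measures ANCOVA with simple effects of group at each time.
GLM post1 post2 BY group WITH pretest
  /WSFACTOR=time 2 Polynomial
  /METHOD=SSTYPE(3)
  /EMMEANS=TABLES(group*time) COMPARE(group) ADJ(BONFERRONI)
  /WSDESIGN=time
  /DESIGN=pretest group.

At each level of time, the EMMEANS line generates a "Univariate Tests" table
with a single F for the overall group effect and a "Pairwise Comparisons"
table with a mean difference and p value for each pair of groups.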

Thanks!
[snip, previous]