Re: Simple Main Effects Pairwise Comparisons vs Univariate Tests
Posted by statshelp on Jun 06, 2014; 7:39pm
URL: http://spssx-discussion.165.s1.nabble.com/Simple-Main-Effects-Pairwise-Comparisons-vs-Univariate-Tests-tp5726323p5726379.html
Thank you all for your helpful feedback. I think the discussion has moved a bit away from my original question, but to answer some of yours:
Yes, the design was a true experiment--participants were randomly assigned and did not differ significantly at pre-test.
Yes, I graphed the 9 means, and the trend is as hypothesized--the groups do not differ at pre-test, and the experimental group shows a more dramatic increase at post-tests 1 and 2.
The control groups do not appear to be entirely parallel--one was a treated control group, so we expected it might show some gain over time relative to the control group that received no interaction during the intervention phase.
Yes, I did a power analysis--unfortunately, we fell slightly below our target sample size.
I'm not opposed to using the pre-test as a covariate; I just wonder how the results would be received. In virtually every publication on this topic, I see ANOVAs presented with the interaction results followed by pairwise comparisons.
My original question still remains, though: how should I report the results? The pairwise comparisons provide p values, and the univariate tests presented right after them in the SPSS output provide F statistics with p values. I did run an ANCOVA with pre-test as a covariate, and now both the pairwise comparisons between groups and the univariate test for group are significant at both post-tests, but with different p values. In other publications with this type of 3-group design, I see only F statistics reported when group differences are discussed. Are those from the univariate test--even though that test does not refer to a contrast between only two groups?
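In case it helps to be concrete, the ANCOVA I ran was along these lines (the variable names are simplified stand-ins: post1 for the first post-test score, pretest for the pre-test score, group for the 3-level factor):

* One-way ANCOVA on post-test 1 with pre-test as the covariate.
* The COMPARE keyword on EMMEANS is what produces both tables:
* the Pairwise Comparisons table (a p value for each pair of groups)
* and the Univariate Tests table (a single overall F for group).
GLM post1 BY group WITH pretest
  /EMMEANS=TABLES(group) COMPARE ADJ(BONFERRONI)
  /DESIGN=pretest group.

The p values in the Pairwise Comparisons table come from t tests of the adjusted mean differences, so they won't match the single F in the Univariate Tests table except in the two-group case.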
Thanks!