The problem that I have with that too-common test is that the
test results do not tell you what you want to know. Even if there is
a big effect, you can end up with all three tests "significant," and you
still have to look at the means to find out what is going on.
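The reverse pattern in the original question (interaction significant, neither
main effect significant) is also easy to see from the cell means. Here is a toy
sketch with made-up numbers where the two groups change in opposite directions,
so the marginal (main-effect) contrasts cancel exactly while the interaction
contrast does not:

```python
# Toy 2x2 cell means (made-up numbers, not from any real data) showing how
# an interaction can be large while both main-effect contrasts are zero:
# the groups change in opposite directions, so the marginals cancel.
means = {("control", "pre"): 12.0, ("control", "post"): 10.0,
         ("treated", "pre"): 10.0, ("treated", "post"): 12.0}

# Main effect of group: average of control cells minus average of treated cells.
group_main = (means[("control", "pre")] + means[("control", "post")]) / 2 \
           - (means[("treated", "pre")] + means[("treated", "post")]) / 2

# Main effect of time: average of pre cells minus average of post cells.
time_main = (means[("control", "pre")] + means[("treated", "pre")]) / 2 \
          - (means[("control", "post")] + means[("treated", "post")]) / 2

# Interaction: difference between the two groups' pre-to-post changes.
interaction = (means[("control", "post")] - means[("control", "pre")]) \
            - (means[("treated", "post")] - means[("treated", "pre")])

print(group_main, time_main, interaction)   # → 0.0 0.0 -4.0
```

With means like these, the marginal tests have literally nothing to detect,
which is why looking at the cell means (or plotting them) is the only way to
interpret such a result.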
What seems to be the natural test is to use the Pre as the covariate,
while testing the difference in Post. That does have an assumption that
the two groups do /not/ differ at Pre. Under randomization, that will usually
be the case. If groups differ at Pre, then you do have to admit that the
conditions for ideal testing do not exist. What you can fairly conclude will
depend, then, on looking at the means. Similarly, if there are differences
in variance, you might need to make explanations based on knowledge of what
you are measuring and what the nature of change was expected to be. (For
instance, "bad scaling" could be a problem that makes variances nearly constant,
either for both groups at Pre, or for the Treated group at Post.)
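The covariate approach above can be sketched numerically. This is a minimal
illustration with made-up data (not from the original poster): it fits
Post = b0 + b1*Pre + b2*Group by ordinary least squares, so b2 is the
group difference at Post adjusted for Pre. A real analysis would use an
ANCOVA procedure (e.g. UNIANOVA in SPSS) rather than hand-rolled algebra.

```python
# ANCOVA sketch: group effect on Post, adjusting for Pre.
# All data are invented for illustration; grp is 0 = control, 1 = treated.
pre  = [10, 12, 11, 13, 10, 12, 11, 13]
post = [11, 13, 12, 15, 14, 16, 16, 17]
grp  = [0, 0, 0, 0, 1, 1, 1, 1]

# Design matrix for the model post = b0 + b1*pre + b2*grp.
X = [[1.0, p, g] for p, g in zip(pre, grp)]

def xtx(X):
    """3x3 matrix X'X from the normal equations."""
    return [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]

def xty(X, y):
    """3-vector X'y from the normal equations."""
    return [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]

def det3(m):
    """Determinant of a 3x3 matrix, by cofactor expansion."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

# Solve (X'X) b = X'y by Cramer's rule (fine for a 3-parameter model).
A, c = xtx(X), xty(X, post)
D = det3(A)
b = []
for j in range(3):
    Aj = [row[:] for row in A]
    for i in range(3):
        Aj[i][j] = c[i]
    b.append(det3(Aj) / D)

b0, b1, b2 = b
print("Pre-adjusted group effect at Post: b2 = %.2f" % b2)   # → 3.00
```

Because the two groups were given identical Pre scores here (as randomization
aims to achieve on average), the adjusted effect b2 equals the raw difference
in Post means; when the groups differ at Pre, the adjustment matters, and
that is exactly the situation where the caveats above apply.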
--
Rich Ulrich
> Date: Mon, 25 Apr 2016 22:39:39 -0700
> From:
[hidden email]
> Subject: 2way mixed ANOVA significant interaction, but main effects not significant. What does this mean
> To:
[hidden email]
>
> Hello,
>
> I performed a 2way mixed ANOVA with "group" (intervention and control) as
> the between subjects independent variable and "time" (pre-test and
> post-test) as the within the subjects IV. In the results I got a significant
> interaction between the two IVs, but main effects for either of them are not
> significant. how can I interpret this?
>
> Thank you.
>
> Nitya
>