http://spssx-discussion.165.s1.nabble.com/Simple-Main-Effects-Pairwise-Comparisons-vs-Univariate-Tests-tp5726323p5726359.html
Has somebody declared there is no longer any need to satisfy the homogeneity of slopes assumption before ANCOVA? I keep seeing presentations and publications where everything is “adjusted” for covariates like “pre” or “age”, with no mention of any test of whether the group slopes are NSD (not significantly different). How can one adjust means from a common regression line when subgroups may have completely opposing slopes?
I did check with a former stats-consulting faculty member at U. Waterloo (now retired), and she was rather scathing about what she saw as cavalier application of ANCOVA for “adjustment” of group means. I have even seen one solution (in another forum) for situations where the slopes are significantly different: “just don’t call it ANCOVA”!
I’d be interested in feedback, since some colleagues stuck my name on a manuscript and were surprised when I queried them about this pre-condition for ANCOVA. Am I old-fashioned, or misinformed?
Ian D. Martin, Ph.D.
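[The pre-check described above can be run directly in syntax. A minimal sketch, using hypothetical variable names (Post, Pre, Group): the covariate-by-factor term is added to a custom /DESIGN, and only if that term is non-significant would one drop it and interpret the usual ANCOVA adjustment.

```spss
* Step 1: test homogeneity of regression slopes.
* The Group*Pre term carries the between-group slope differences.
* Variable names here are hypothetical placeholders.
UNIANOVA Post BY Group WITH Pre
  /METHOD=SSTYPE(3)
  /DESIGN=Group Pre Group*Pre.

* Step 2: only if Group*Pre is non-significant, refit the standard ANCOVA.
UNIANOVA Post BY Group WITH Pre
  /METHOD=SSTYPE(3)
  /DESIGN=Group Pre.
```

A significant Group*Pre term in Step 1 means the adjusted means from Step 2 depend on where along the covariate you evaluate them, which is exactly the concern raised above.]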
> I agree with Rich that Pre should be treated as a covariate.
>
> To the OP, the authors of this letter have suggested centering the covariate
> (the Pre score in this case) when performing repeated measures ANCOVA with
> SPSS.
>
>
> http://www.researchgate.net/publication/233727939_Use_of_covariates_in_randomized_controlled_trials/file/79e4150acf478c492e.pdf
> HTH.
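[A minimal sketch of grand-mean centering the covariate in SPSS, as the letter suggests. The variable name FBTotPre is borrowed from the GLM syntax quoted later in the thread; whether it matches the OP's actual file is an assumption.

```spss
* Grand-mean center the covariate (variable names assumed, not confirmed).
* AGGREGATE with no BREAK subcommand writes the overall mean to every case.
AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /PreMean=MEAN(FBTotPre).
COMPUTE FBTotPreC = FBTotPre - PreMean.
EXECUTE.
* FBTotPreC can then replace FBTotPre as the covariate after WITH in GLM.
```

Centering does not change the F tests, but it makes the intercept and the estimated marginal means interpretable at the average Pre score.]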
>
>
>
> Rich Ulrich wrote
>> You want to know if Experimental differs from Control at post1 and post2,
>> controlling for Pre. The design you ran buries the test of interest in an
>> interaction term, which is the sort of complication that should be avoided
>> whenever possible. You get a weak test, since it measures something ELSE
>> plus what you want.
>>
>> Test just the Post periods as Repeats, using Pre as covariate. That tests
>> what you want, and leaves pretty simple descriptions. The error is reduced
>> for subsequent tests by the amount of the (high) correlation of Pre with
>> Post. Or, is there a reason you don't like that?
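[In syntax, the suggestion above might look like the following sketch. The FBTot* variable names are taken from the poster's GLM command quoted further down; the two Post scores become the repeated measure and Pre enters after WITH as the covariate.

```spss
* Repeated measures on the two Post scores only, with Pre as covariate.
* Names taken from the OP's syntax; treat this as a sketch, not a prescription.
GLM FBTotPost1 FBTotPost2 BY Group WITH FBTotPre
  /WSFACTOR=Time 2 Polynomial
  /METHOD=SSTYPE(3)
  /EMMEANS=TABLES(Group) COMPARE ADJ(LSD)
  /PRINT=DESCRIPTIVE ETASQ
  /WSDESIGN=Time
  /DESIGN=FBTotPre Group.
```

The Group effect here is the covariate-adjusted Experimental-vs-Control comparison averaged over the two post-tests, and the EMMEANS table reports the means evaluated at the mean of FBTotPre.]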
>>
>> For this design, the interaction is measuring the difference between the
>> Post periods, which is probably less interesting than other Period effects.
>>
>> Also, consider the purpose or character of the TWO controls. If they are
>> "very similar", you might justify combining the two controls, and then you
>> have a two-group result to describe, which is even easier than three.
>>
>> --
>> Rich Ulrich
>>
>> Date: Wed, 4 Jun 2014 12:29:54 -0700
>> From: vltompkins@
>> Subject: Re: Simple Main Effects Pairwise Comparisons vs Univariate Tests
>> To: SPSSX-L@.UGA
>>
>> Dear Melissa and Rich,
>>
>> Thank you for your feedback. The syntax I used was:
>>
>> GLM FBTotPre FBTotPost1 FBTotPost2 BY Group
>>   /WSFACTOR=Time 3 Polynomial
>>   /MEASURE=FBtot
>>   /METHOD=SSTYPE(3)
>>   /POSTHOC=Group(LSD BONFERRONI)
>>   /PLOT=PROFILE(Time*Group)
>>   /EMMEANS=TABLES(OVERALL)
>>   /EMMEANS=TABLES(Group) COMPARE ADJ(LSD)
>>   /EMMEANS=TABLES(Time) COMPARE ADJ(LSD)
>>   /EMMEANS=TABLES(Group*Time) COMPARE (GROUP) ADJ(LSD)
>>   /PRINT=DESCRIPTIVE ETASQ OPOWER HOMOGENEITY
>>   /CRITERIA=ALPHA(.05)
>>   /WSDESIGN=Time
>>   /DESIGN=Group.
>>
>> To better explain the design, this was an intervention with three time
>> points (1 pre-test and 2 post-tests). There was one experimental group and
>> two control groups. There was a significant main effect of time (kids in
>> all groups increased in the skill assessed over time, which is expected),
>> but that is not the central research question. I am interested in
>> the TimeXGroup interaction, and plotted means show that the experimental
>> group appeared to gain more dramatically in the skill assessed compared to
>> the two control groups. The pairwise comparisons following the significant
>> interaction showed that the experimental group did better than control
>> group 1 at Time 2 (post-test 1; p = .03) and at Time 3 (post-test 2; p =
>> .047). However, the univariate tests are not significant at Time 2 (p =
>> .067) or Time 3 (p = .14). Perhaps this is an issue of power as Rich
>> suggests? My question is how to report these results. What exactly is
>> reported for the pairwise comparisons since there is no statistic in the
>> output? What do you do when there are significant pairwise comparisons but
>> not a significant Univariate Test?
>>
>>
>> Thanks!
>> Virginia
>>
>>
>> On Wed, Jun 4, 2014 at 12:23 PM, Rich Ulrich [via SPSSX Discussion]
>> <[hidden email]> wrote:
>>
>> More detail would help, for seeing entirely what you have done, but the
>> questions at the end are not too tough.
>>
>> In reverse order:
>> (Explaining...) Plot your means. Is there a simple description? If not,
>> you might economically "explain these findings" as Type II error.
>>
>> (Univariate...) Univariate tests on 3 groups do not have to show 2-group
>> effects as "significant" when there were barely differences on 2 groups
>> tested alone. The extra power from having fewer d.f. being tested is one
>> reason why designs with two groups are inherently superior to designs
>> with three groups.
>>
>> (Interaction) The interaction is tested using within-subject variation,
>> which your follow-up tests do not use. Trends across time are easier to
>> test and report when you can use the linear trend component. If you do
>> not expect "linear" as the nature of your change, perhaps the design is
>> wrong: you could use Baseline as a covariate, or else test primarily Base
>> versus Other. Or, if you do not expect change across time, is there some
>> other reason to be interested in random effects seen when doing a lot of
>> tests?
>>
>> --
>> Rich Ulrich
>>
>>
>>> Date: Tue, 3 Jun 2014 16:29:27 -0700
>>> From: [hidden email]
>>> Subject: Simple Main Effects Pairwise Comparisons vs Univariate Tests
>>> To: [hidden email]
>>>
>>> I have an experimental design with Time as the within-subjects factor (3
>>> levels) and Group as the between-subjects factor (3 levels). There is a
>>> significant interaction. When examining the simple main effects, I find a
>>> significant difference between two of the groups at Time 2 and Time 3. My
>>> question is why the Univariate Tests do not show a significant effect at
>>> Time 2 or Time 3. What do I report and how do I explain these findings?
>>>
>>> Thanks!
> --
> Bruce Weaver
> [hidden email]
> http://sites.google.com/a/lakeheadu.ca/bweaver/
> "When all else fails, RTFM."
>
> NOTE: My Hotmail account is not monitored regularly.
> To send me an e-mail, please use the address shown above.