I have an experimental design with Time as the within-subjects factor (3 levels) and Group as the between-subjects factor (3 levels). There is a significant interaction. When examining the simple main effects, I find a significant difference between two of the groups at Time 2 and Time 3. My question is why the Univariate Tests do not show a significant effect at Time 2 or Time 3. What do I report and how do I explain these findings? Thanks!
|
You'll need to say more about what you actually used to examine the simple main effects.
Could it be that the former is looking at the slope between times while the latter looks at the actual state at the time point?

Melissa
|
Virginia, what commands are you running for the two analyses? From what you say below, the difference is that one is paired and the other is not. IF that is the case, it would suggest that pairing matters.
However, list members can help better with clearer details (preferably syntax) about what was run. Some thoughts to consider:

Is it logical to pair cases in your groups? E.g., height across time for children is very logical to pair, and illogical not to pair; in that case the overall groups may not differ at each time, but pairing takes into account the variation of values within each sample.

Apparently your groups have different variances. (Continuing the height example, one group may be same-grade students (low variance) and another may be multiple-grade students (high variance).) Is that difference in variances expected or unexpected? (Why?) Is the difference in means meaningful? (What does it mean?) And a key consideration, which may be yours or someone else's: does that matter? (So what?)

Melissa
Hello Melissa,

I'm not sure I understand what you are asking, but I don't think that is the case. The pairwise comparisons compare the mean differences between each pair of groups at each of the 3 time points, with a significant difference between two of the groups at Time 2 and Time 3. The univariate tests present the F and p-value for each of the 3 times, but none is significant. I suppose it is possible to have a significant interaction, with some of the group comparisons at some of the time points being significant, without each time point showing a significant overall group effect? Does that make sense? If that is the case, how do I report the findings of the simple main effects and the univariate tests?

Thanks!
Virginia
|
In reply to this post by statshelp
More detail would help for seeing exactly what you have done, but the questions at the end are not too tough. In reverse order:

(Explaining...) Plot your means. Is there a simple description? If not, you might economically "explain these findings" as Type II error.

(Univariate...) Univariate tests on 3 groups do not have to show 2-group effects as "significant" when the differences were barely significant with 2 groups tested alone. The extra power from having fewer d.f. under test is one reason why designs with two groups are inherently superior to designs with three groups.

(Interaction) The interaction is tested using within-subject variation, which your follow-up tests do not use. Trends across time are easier to test and report when you can use the linear trend component. If you do not expect "linear" as the nature of your change, perhaps the design is wrong: you could use Baseline as a covariate, or else test primarily Base versus Other. Or, if you do not expect change across time, is there some other reason to be interested in random effects seen when doing a lot of tests?

-- Rich Ulrich
|
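A minimal sketch of the linear-trend idea, in SPSS syntax, using the variable names from the syntax posted below; the derived score LinTrend and the contrast weights (-1, 0, +1) are illustrative only, not anyone's actual analysis:

* Per-subject linear trend across the three time points (weights -1, 0, +1),
* then a one-way ANOVA of the trend scores by Group.
COMPUTE LinTrend = FBTotPost2 - FBTotPre.
EXECUTE.
ONEWAY LinTrend BY Group
  /POSTHOC=LSD BONFERRONI ALPHA(0.05).

A significant Group effect on this score corresponds to the linear component of the Time x Group interaction.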
Dear Melissa and Rich,

Thank you for your feedback. The syntax I used was:

GLM FBTotPre FBTotPost1 FBTotPost2 BY Group
  /WSFACTOR=Time 3 Polynomial
  /MEASURE=FBtot
  /METHOD=SSTYPE(3)
  /POSTHOC=Group(LSD BONFERRONI)
  /PLOT=PROFILE(Time*Group)
  /EMMEANS=TABLES(OVERALL)
  /EMMEANS=TABLES(Group) COMPARE ADJ(LSD)
  /EMMEANS=TABLES(Time) COMPARE ADJ(LSD)
  /EMMEANS=TABLES(Group*Time) COMPARE (GROUP) ADJ(LSD)
  /PRINT=DESCRIPTIVE ETASQ OPOWER HOMOGENEITY
  /CRITERIA=ALPHA(.05)
  /WSDESIGN=Time
  /DESIGN=Group.

To better explain the design, this was an intervention with three time points (1 pre-test and 2 post-tests). There was one experimental group and two control groups. There was a significant main effect of time (kids in all groups increased in the skill assessed over time, which is expected), but that is not the central research question. I am interested in the Time x Group interaction, and plotted means show that the experimental group appeared to gain more dramatically in the skill assessed compared to the two control groups. The pairwise comparisons following the significant interaction showed that the experimental group did better than control group 1 at Time 2 (post-test 1; p = .03) and at Time 3 (post-test 2; p = .047). However, the univariate tests are not significant at Time 2 (p = .067) or Time 3 (p = .14). Perhaps this is an issue of power, as Rich suggests? My question is how to report these results. What exactly is reported for the pairwise comparisons, since there is no test statistic in the output? What do you do when there are significant pairwise comparisons but not a significant univariate test?

Thanks!
Virginia
|
You want to know if Experimental differs from Control at post1 and post2, controlling for Pre. The design you ran buries the test of interest in an interaction term, which is the sort of complication that should be avoided whenever possible. You get a weak test, since it measures something ELSE plus what you want.

Test just the Post periods as repeats, using Pre as a covariate. That tests what you want, and leaves pretty simple descriptions. The error term for subsequent tests is reduced by the amount of the (high) correlation of Pre with Post. Or is there a reason you don't like that?

For this design, the interaction is measuring the difference between the Post periods, which is probably less interesting than other period effects.

Also, consider the purpose or character of the TWO controls. If they are "very similar", you might justify combining the two controls, and then you have a two-group result to describe, which is even easier than three.

-- Rich Ulrich
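A minimal sketch of what this suggestion might look like in GLM, using the variable names from Virginia's syntax; this is one possible specification, not Rich's exact command, and the EMMEANS choices are illustrative:

* Two post-tests as the repeated factor, pre-test as covariate.
GLM FBTotPost1 FBTotPost2 BY Group WITH FBTotPre
  /WSFACTOR=Time 2 Polynomial
  /MEASURE=FBtot
  /METHOD=SSTYPE(3)
  /EMMEANS=TABLES(Group*Time) WITH(FBTotPre=MEAN) COMPARE(Group) ADJ(LSD)
  /PRINT=DESCRIPTIVE ETASQ
  /CRITERIA=ALPHA(.05)
  /WSDESIGN=Time
  /DESIGN=FBTotPre Group.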
|
Administrator
|
I agree with Rich that Pre should be treated as a covariate.
To the OP, the authors of this letter have suggested centering the covariate (the Pre score in this case) when performing repeated measures ANCOVA with SPSS. http://www.researchgate.net/publication/233727939_Use_of_covariates_in_randomized_controlled_trials/file/79e4150acf478c492e.pdf HTH.
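A minimal sketch of grand-mean centering in SPSS syntax, assuming the variable names from Virginia's message; FBTotPre_c is a hypothetical name for the centered score:

* Add the grand mean of the pre-test to every case, then center.
AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /FBTotPre_mean=MEAN(FBTotPre).
COMPUTE FBTotPre_c = FBTotPre - FBTotPre_mean.
EXECUTE.

The centered FBTotPre_c would then replace FBTotPre as the covariate.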
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM." |
In reply to this post by statshelp
"There was one experimental group and two control groups."
Are these actually "experimental" and "control" groups? I.e., was group membership assigned by a random process? Or were these "treatment" and comparison groups? If the latter, what defines the groups?
-----
Plot the 9 means with time on the horizontal (X) axis, the DV on the vertical (Y) axis, and the points for each group connected. By eye, do the lines appear very far from parallel? Is there a visually clear difference in the changes? Add vertical error bars to the points and eyeball the parallelism again. What do you see?
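A minimal sketch of one way to draw that plot in SPSS syntax, assuming the wide-format variable names from Virginia's message; the restructure, the index variable Time, and the id variable are hypothetical names:

* Restructure to long format, then plot one line per group across time.
VARSTOCASES
  /MAKE FBTot FROM FBTotPre FBTotPost1 FBTotPost2
  /INDEX=Time(3)
  /ID=id.
GRAPH
  /LINE(MULTIPLE)=MEAN(FBTot) BY Time BY Group.

Error bars can then be added in the Chart Editor, or the plot can be rebuilt in GGRAPH/GPL with an interval element.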
Art Kendall
Social Research Consultants |
In reply to this post by Bruce Weaver
Has somebody declared there is no longer any need to satisfy the homogeneity-of-slopes assumption before ANCOVA? I keep seeing presentations and publications where everything is "adjusted" for covariates like "pre" or "age", with no mention of a test to see whether the group slopes are NSD (not significantly different). How can one adjust means from a common regression line when subgroups may have completely opposing slopes?
I did check with a former stats-consultant faculty member at U. Waterloo, now retired, and she was rather scathing about what she saw as cavalier application of ANCOVA for "adjustment" of group means. I have even seen one solution offered (in another forum) for situations where slopes are significantly different: "just don't call it ANCOVA"! I'd be interested in feedback, since some colleagues stuck my name on a ms and were surprised when I queried them about this pre-condition for ANCOVA. Am I olde fashioned or misinformed?

Ian Martin
Ian D. Martin, Ph.D.
Data Analysis & Environmental Consulting
|
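A minimal sketch of the slope-homogeneity check Ian describes, in SPSS syntax, using the thread's variable names and post-test 1 as an example outcome:

* Test homogeneity of regression slopes: a significant Group*FBTotPre
* term means the group slopes differ and a common adjustment is suspect.
UNIANOVA FBTotPost1 BY Group WITH FBTotPre
  /METHOD=SSTYPE(3)
  /PRINT=PARAMETER
  /DESIGN=Group FBTotPre Group*FBTotPre.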
In reply to this post by Bruce Weaver
"I agree with Rich that Pre should be treated as a covariate"
As another look at the data, especially if you did not build in sufficient power: before you started, what did you decide would be a minimum meaningful difference in changes? Did you do a power analysis? It may be that results will be statistically significant treating the pre-test as a covariate but not as repeated measures. If so, take them with another grain of salt. However, the visualization you include in your report would be more complicated.

Try to work out idealized visualizations of the models before you run those models. You can create data sets with 9 means that show the patterns you expect, i.e., the difference in changes (the non-parallelism or interaction effect) that you would be hoping for; a sketch of such a data set appears below. These will be an aid in setting context when you eyeball the obtained results. Without knowing your complete design, an idealized visualization would have the pre-test points all the same, with the two control groups parallel to each other.

The argument you can make about your results, whether you end up reporting a repeated-measures or a covariance approach, depends very much on whether the comparison groups are "control" groups. In either the repeated-measures model or the covariate model, to avoid irrelevant multiple tests, ONLY test the specific interaction effect(s) that you hypothesized: the two comparison/control groups have parallel profiles, and the profile for the treatment group is NOT parallel to their (separate or pooled, depending on your hypothesis) profile(s).

Recall that statistically significant results may not be large enough to be meaningful.

HTH
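A hypothetical sketch of the idealized-pattern idea in SPSS syntax; all nine cell means below are invented purely for illustration:

* Invented cell means: equal at pre-test, controls roughly parallel,
* treatment group (Group 1) gaining more steeply.
DATA LIST FREE / Group Time CellMean.
BEGIN DATA
1 1 10  1 2 18  1 3 22
2 1 10  2 2 13  2 3 15
3 1 10  3 2 12  3 3 13
END DATA.
GRAPH
  /LINE(MULTIPLE)=MEAN(CellMean) BY Time BY Group.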
Art Kendall
Social Research Consultants |
Administrator
|
In reply to this post by Ian Martin-2
Good morning Ian. You raise a good point about homogeneity of regression.
In the dialect of statistical terminology I learned, ANCOVA refers to a model that does not include group x covariate interactions, and so the model forces the lines to be parallel. To determine whether that is a sensible model, one could/should look at a scatterplot with X = covariate, Y = outcome variable and separate regression lines by group (with the slopes free to vary). If the lines in that plot are pretty close to parallel, ANCOVA makes sense. If the lines are not pretty close to parallel, one probably wants a model that includes group x covariate terms, because they allow the slope to vary by group. But as noted earlier, I would no longer call that model ANCOVA.

Furthermore, with group x covariate products included, the groups would have to be compared at more than one value of the covariate, because differences among the groups would depend on the value of the covariate. If you are using GLM or UNIANOVA, one convenient way to do this is via multiple EMMEANS sub-commands where you choose different values of the covariate. E.g., something like this:

  /EMMEANS=TABLES(Group) WITH(Covariate=x) COMPARE
  /EMMEANS=TABLES(Group) WITH(Covariate=y) COMPARE
  /EMMEANS=TABLES(Group) WITH(Covariate=z) COMPARE

where x, y and z are three selected values of the covariate (probably low, medium and high values). You can also tack on ADJ(LSD), ADJ(BONFERRONI) or ADJ(SIDAK) if you want to adjust for multiple comparisons.

Cheers,
Bruce
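A fuller, runnable sketch of that fragment, assuming the thread's variable names; the covariate values 5, 10 and 15 are placeholders to be replaced with low, medium and high values of FBTotPre:

* Non-parallel-slopes model with pick-a-point group comparisons.
UNIANOVA FBTotPost1 BY Group WITH FBTotPre
  /METHOD=SSTYPE(3)
  /EMMEANS=TABLES(Group) WITH(FBTotPre=5) COMPARE ADJ(BONFERRONI)
  /EMMEANS=TABLES(Group) WITH(FBTotPre=10) COMPARE ADJ(BONFERRONI)
  /EMMEANS=TABLES(Group) WITH(FBTotPre=15) COMPARE ADJ(BONFERRONI)
  /DESIGN=Group FBTotPre Group*FBTotPre.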
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM." |
In reply to this post by Ian Martin-2
Assumptions: what is assumed for ANCOVA, along with homogeneity of slopes, is homogeneity of the initial means. Neither is apt to be a problem for designed studies where group membership was randomized. I thought about tossing in a comment about the need for the Pre means to be equal (or you can't do valid inference), but I didn't give a thought to the slopes. It is not that the assumption does not matter, but I have seldom seen it violated, and I have never seen it violated with equal means for Pre and Pre as covariate -- except, maybe, when that was an expected part of a weird outcome. If the slopes cannot readily be *assumed* equal, then I rather expect that the "unequal slopes" will be a major aspect of whatever is being concluded.

Maybe I should leave further commentary on actually using "unequal slopes" to people who have somehow found them useful, instead of a symptom to be cured. I can say more about the bad symptom. When means of the covariate are unequal, there are usually other problems. Covariance is problematic when the groups are observational, especially when they are created by the high/low dichotomy of some variable correlated with the covariate. It is problematic for inference even when there are *no* scaling problems on the measures, such as basement/ceiling effects. (Combining those problems will often create "unequal slopes".)

It is easy to make mistakes with unequal Pre, even with homogeneous regressions. It is not "fair", for instance, to conclude that the high-scoring group "did worse" when the plots show that the high and low groups both merely "regressed toward the mean", as any statistician should expect.

-- Rich Ulrich
|
There are several assumptions for traditional ANCOVA; some that immediately come to mind include:

(1) homogeneity of regression slopes
(2) independence of IV and covariate
(3) homogeneity of adjusted population variances
(4) linear relationship between covariate and DV
(5) covariate is fixed and measured without error
(6) independence of observations
(7) normality

Arguably, some are more important than others. Violations of key ANCOVA assumptions tend to arise when one is dealing with intact / nonrandomized groups, as discussed by another poster.

Weren't there more than 2 time points? I would move over to a linear mixed model, which is far more flexible (e.g., it does not require (a) a sphericity/CS residual covariance or (b) listwise deletion of cases due to missing response data, and the list goes on and on).

Ryan
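A minimal sketch of what such a mixed model might look like in SPSS syntax, assuming the long-format restructure and the id variable sketched earlier in the thread; the AR1 residual structure is one plausible choice, not a recommendation:

* The same design fit as a linear mixed model on long-format data.
MIXED FBTot BY Group Time
  /FIXED=Group Time Group*Time
  /REPEATED=Time | SUBJECT(id) COVTYPE(AR1)
  /EMMEANS=TABLES(Group*Time) COMPARE(Group) ADJ(LSD)
  /PRINT=SOLUTION TESTCOV.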
|
Thank you all for your helpful feedback. I think the discussion has moved a bit away from my original question, but to answer some of your questions: Yes, the design was a true experiment; participants were randomly assigned and do not differ significantly at pre-test. Yes, I graphed the 9 means and the trend is as hypothesized: groups do not differ at pre-test, and the experimental group appears to have a more dramatic increase at post-tests 1 and 2. The control groups do not appear to be entirely parallel; one was a treated control group, so some gain over time was expected in that group relative to the control group that did not receive any interaction during the intervention phase. I'm not opposed to the pre-test as a covariate; I only wonder how the results would be received. In virtually every publication on this topic I see ANOVAs presented with the interaction results followed by the pairwise comparisons.
|
I very often tell people to use the tests that they see in their literature. But if the literature is screwed up -- like sticking the Baseline into the repeated measures when you won't be looking at linear trend -- I think I would mention that the literature often does this wrong, so that authors end up trying to untangle interaction effects instead of looking at simpler main effects. Test what you are interested in, and report it. You might not feel comfortable ignoring that literature (or just ignoring the issue) unless you can convince yourself that the more awkward approach is truly inferior.

I don't really follow which tests are being cited in the final paragraph. "... with different p values": if the two periods are not at all different in means, the mean of both periods is what deserves reporting, not the individual means and contrasts. However, if you were saying that the p-values differed from what the previous RM analysis gave you, well, that is expected; the covariate has reduced the size of the error term for tests between groups.

I don't know how you want to handle the ANOVA situation with 3 groups, with respect to testing Experimental versus the others. Rerun analyses with two? I would want to consider the a-priori status of the two controls: whatever was expected. Is the overall test for 3 groups necessary at all, or could it have been omitted? Or does it justify the further testing?

-- Rich Ulrich
Dear Rich,

Thanks again for your feedback. That makes sense regarding reporting just the main effects at the post-tests, controlling for pre-test. Regarding the SPSS output, I was referring to the difference between the table reporting the pairwise comparisons, which provides the mean difference, p-value, and confidence interval for each pairwise comparison at each of the two times, and the table immediately following it, which provides the F test and p-value for each time. The p-values differ because the latter is the main effect with all three groups, not just two. My question is simply which to report. I've seen the literature report one or the other, but not both; it seems odd to report just the F test and then say which comparisons were significant without also reporting those p-values, which differ at least in my sample. Is there a preferred way to report these?

Thanks,
Virginia
|