There is a ton of literature on the subject of analyzing change. Unfortunately,
much of it is not very well informed. On a quick Google, I found that the first
answer at the URL below (the Reply referencing Senn) gives a pretty good overview,
plus references. (The Reply-er endorses using ANCOVA for controlled studies.)
http://stats.stackexchange.com/questions/3466/best-practice-when-analysing-pre-post-treatment-control-designs

I will add:
On assumptions: it is nice to have (1) equal variances everywhere, (2) equal means at Pre, and
(3) a shared regression line. Given those, it is hard to fault the ANCOVA. In my experience,
"unequal variances" can sometimes be fixed by a suitable transformation of the criterion. But
failure of those assumptions is why (at least, in my off-hand thoughts here) you might want to
analyze the Outcome while ignoring Pre, or analyze the simple change score. Unequal-at-Pre
raises serious logical conundrums at times, and regression-not-to-the-shared-mean on top of
unequal-at-Pre puts you into statistical complication and controversy. The latter was the case
in analyzing long-term outcomes for Headstart vs. other students -- where the expected outcome
with no intervention, according to other experience, was that the lower-achieving target cases
would fall further and further behind.
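For concreteness, here is a minimal sketch of the two analyses being contrasted -- ANCOVA
(Post regressed on Pre plus a treatment dummy, with a shared slope) versus the simple change
score -- on simulated data. Everything here (the sample size, effect size, and variable names)
is illustrative, not from the discussion above; the data are generated so that the ANCOVA
assumptions (equal variances, equal means at Pre, shared regression line) hold by construction.

```python
import numpy as np

# Simulated pre/post data for a randomized design (all values illustrative).
rng = np.random.default_rng(0)
n = 200
group = np.repeat([0, 1], n // 2)        # 0 = control, 1 = treatment
pre = rng.normal(50.0, 10.0, n)          # equal means at Pre, by randomization
true_effect = 5.0
post = 0.6 * pre + true_effect * group + rng.normal(0.0, 8.0, n)

# ANCOVA: regress Post on Pre and the treatment dummy (one shared slope).
X = np.column_stack([np.ones(n), pre, group])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
ancova_effect = coef[2]                  # adjusted treatment effect

# Change-score alternative: difference in mean (Post - Pre) between groups.
change = post - pre
change_effect = change[group == 1].mean() - change[group == 0].mean()

print(f"ANCOVA estimate:       {ancova_effect:.2f}")
print(f"Change-score estimate: {change_effect:.2f}")
```

When the groups are equal at Pre, both estimators are unbiased for the treatment effect, but
the ANCOVA estimate is typically less variable; the disagreements discussed above arise when
the equal-at-Pre assumption fails.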
--
Rich Ulrich