Score changes in repeated measure designs
Posted by statcat on Jun 06, 2014; 4:49pm
URL: http://spssx-discussion.165.s1.nabble.com/Score-changes-in-repeated-measure-designs-tp5726375.html
Hi there listers,
I am facing the following dilemma. A repeated-measures design (with a stable control group and experimental group) was administered six times over the course of 18 months to institutionalised individuals; time and group are the independent variables. The data are continuous, dichotomous, or counts of occurrences (some are sums of Likert scales). However, due to external factors there was a break between t3 and t4 that was deemed too large to ignore. So I was instructed, for the time being, to analyse the two cycles of data collection (three measurements each) separately.
However, much of the effort and potential of the study would be for naught if no information on the overall progress could be ascertained. What would be a reasonable way to do this (while being sensitive to the fact that any effect between cycles could just as well be design bias)? Simply comparing cycle 1 and cycle 2 means seems oversimplified and uninformative. I tried to find something on comparing paired score changes (t1-t0 & t2-t1 vs. t4-t3 & t5-t4), but I have not come up with anything feasible yet.
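To make the change-score idea concrete, here is a minimal sketch (in Python, with made-up numbers, not the actual study data) of the comparison I have in mind: average the two within-cycle changes per subject, then compare the cycles with a paired t-test. It ignores the group factor and the measurement types entirely and is only meant to illustrate the structure of the comparison.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical scores for four subjects at the six time points t0..t5.
# (Illustrative numbers only -- not from the study.)
scores = {
    "s1": [10, 12, 13, 13, 14, 15],
    "s2": [8, 9, 11, 10, 11, 12],
    "s3": [12, 12, 14, 13, 15, 16],
    "s4": [9, 11, 12, 11, 12, 14],
}

def cycle_change(ts, start):
    # Mean of the two successive change scores within one 3-wave cycle,
    # e.g. (t1-t0 and t2-t1) for cycle 1, (t4-t3 and t5-t4) for cycle 2.
    return mean([ts[start + 1] - ts[start], ts[start + 2] - ts[start + 1]])

# Per-subject average change in cycle 1 (t0..t2) and cycle 2 (t3..t5).
c1 = [cycle_change(ts, 0) for ts in scores.values()]
c2 = [cycle_change(ts, 3) for ts in scores.values()]

# Paired differences between the cycles, one per subject.
diffs = [b - a for a, b in zip(c1, c2)]

# Paired t statistic with df = n - 1; compare against a t table,
# or use scipy.stats.ttest_rel if scipy is available.
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))
print(round(t_stat, 3))
```

A significant difference here would still be ambiguous, of course, since any cycle effect is confounded with whatever happened during the break.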
Any suggestions (or pointers towards literature if any come to mind) are highly appreciated.