Hi there listers,
I am facing the following dilemma. A repeated-measures design (with a stable control group and an experimental group) was administered six times over the course of 18 months to institutionalised individuals; time and group are the independent variables. The data are continuous, dichotomous, or counts of occurrences (some are sums of Likert scales). However, due to external factors there was a break between t3 and t4 that was deemed too large to ignore, so I was instructed to analyse the two cycles of data collection (three occasions each) separately for the time being.

Much of the effort and potential of the study would be for naught, though, if no information on the overall progress could be obtained. What would be a reasonable way to do this, while remaining sensitive to the fact that any effect between cycles could just as well be design bias? Simply comparing cycle 1 and cycle 2 means seems oversimplified and lacking. I tried to find something on comparing paired score changes (t1-t0 & t2-t1 vs. t4-t3 & t5-t4), but I could not come up with anything feasible yet.

Any suggestions (or pointers to literature, if any come to mind) are highly appreciated.
If you have an overall hypothesis that, for instance, Treatment is GOOD, you should figure out what single variable best measures that, or figure out how to create a composite score from the several best measures. If you have two or three hypotheses, select a score or create a composite for each.

A proper question is: What effect did you expect over time? Without a whole lot of bias, you can ask: Is there any change in the combined groups across time? Is there any evidence of an "interruption" at the time of the gap between t3 and t4?

For the sake of discussing any design: Do one or more of the periods stand as a "baseline" which should not show any difference between groups? What are the Ns? How many periods or scores are missing?

In order to decide whether the whole effort is going to be a waste of time, you might start by looking at the two group means on the important variables, across time. Is there *any* apparent effect of Treatment (the experimental condition)? Okay, what does it look like? Is that what you expected at the start?

--
Rich Ulrich

> Date: Fri, 6 Jun 2014 09:49:02 -0700
> From: [hidden email]
> Subject: Score changes in repeated measure designs
> To: [hidden email]
>
> [original message trimmed]
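Picking up the question about evidence of an "interruption" at the gap: one common way to frame it is a segmented (interrupted time series) regression with a level-change term and a slope-change term at the break. Below is a minimal numpy sketch on hypothetical cell means; the occasion coding, the gap placement, and the outcome values are all made up for illustration, not taken from the study. With the actual per-individual repeated measures, a mixed model with group, segment, and their interactions would be the more appropriate tool.

```python
import numpy as np

def its_design(times, gap_after):
    """Design matrix for a simple interrupted time series:
    intercept, linear trend, a level-change indicator for occasions
    after the gap, and a slope-change term (time since the gap)."""
    t = np.asarray(times, dtype=float)
    post = (t > gap_after).astype(float)              # 1 for post-gap occasions
    t_post = np.where(post == 1.0, t - (gap_after + 1), 0.0)
    return np.column_stack([np.ones_like(t), t, post, t_post])

times = np.arange(6)                                  # t0 .. t5
y = np.array([10.0, 11.0, 12.0, 15.0, 16.5, 18.0])    # hypothetical means
X = its_design(times, gap_after=2)                    # gap after the 3rd occasion

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[2] estimates the jump in level at the gap;
# beta[3] estimates the change in slope after it.
print(beta)   # here beta is approximately [10, 1, 2, 0.5]
```

To bring the group comparison in, the same segment coding would enter a model alongside group and the group-by-segment interactions, so that "does the experimental group's jump or slope change differ from the control group's" becomes a testable contrast rather than a bare comparison of cycle means.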