I am in the process of proposing a feasibility study (too small to be a "real" intervention study) examining the effects of an intervention. We will be comparing this intervention to another one, and we also want to compare it to a waiting-list control. Here is the problem: the sample I have access to is quite small. Two classes of 20 participants each will be run for the target intervention condition. Our initial plan was to run one group of 20 as the intervention group and the other as the waiting-list control (otherwise ethical issues arise). We were also going to run another 40 people for the comparison intervention (20 in that intervention and 20 on its waiting list). However, given the effect sizes we would expect (on the order of .5), the power of the study is quite small. In discussion with my committee we have come up with a few possible options, but I'm not sure which is the most feasible and best, and I'm not sure where to start looking (I don't know whether these have been done before). Here are the things we were thinking of trying in order to increase the power:

1. Measure each participant at 3 time points and provide the intervention between time points 2 and 3, so that each person could serve as their own control. With this within-subjects/nested design the power is surely better, but practice effects with the measures we are using become more of a concern.

2. Is there some way to use the waiting-list controls as participants in the intervention as well? For example, the waiting list that eventually receives the target intervention could serve as the waiting-list control (for analyses) of the comparison intervention and also as the "intervention group" for the target intervention. Has this been done? It seems like it would increase my power significantly, but I am not sure it will work as well as I think it might. What am I missing?

I would appreciate any and all feedback about my best options for data analysis, and some resources (books or articles) that may have done similar analyses. Thank you so much!

Becca Bubb
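To make the power concern concrete: with two independent groups of 20 and an expected effect around d = 0.5, a simple two-sample t-test has power well below the conventional .80. A minimal sketch of that calculation, written in Python with statsmodels purely for illustration (the numbers are assumptions, not results from the study):

# Rough power for a two-sample t-test: d = 0.5, 20 per group, alpha = .05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.5, nobs1=20, ratio=1.0, alpha=0.05)
print(f"Power with 20 per group at d = 0.5: {power:.2f}")    # roughly .33

# Per-group n that would be needed to reach 80% power at d = 0.5
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05, ratio=1.0)
print(f"n per group for 80% power: {n_needed:.0f}")          # roughly 64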
Hi Becca,
I think it's a mistake to call your study 'too small to be a "real" intervention study'. To do so dismisses its value. I'm a little confused about the sample and design. It sounds like you have two intervention conditions and control conditions. A true no-treatment control (NTC) group is not possible; thus, a wait-list control. But random assignment to condition? On one hand, your N would be 20 per group x 3 groups = 60, but adding up the numbers suggests 80. I don't quite get what the purpose of the study is. What I mean is this: you have a NTC group and two intervention groups, one of which seems to be labeled 'comparison'. If so, that makes me think that the comparison intervention is the already-used intervention. What is your main hypothesis? Is it 'test intervention > comparison intervention' or something else? I'd like to hold off on commenting on your questions until I understand the hypothesis/design.

Gene Maguin
Hi Gene,
Thanks for your response and questions. I will give you more information on the study. I am thinking about it currently as a feasibility study just because I'm not confident that, with the effect sizes we are expecting, we will have enough power to detect differences. I am doing this for my dissertation, and my committee has commented that they think it would need to be bigger to be a full-scale intervention study. I don't mean to dismiss its value, but I also don't want to overplay it, as we may not find significant effects, especially if I cannot find a good way to boost the power.

Here is what we are planning: an MBSR intervention is the focus of the study, and the DVs are different measures of cognition. We are going to teach the class in a retirement facility. Unfortunately, we cannot teach the "alternative intervention" (intended to be an active control: a general cognitive training course) in the same facility due to recruitment issues. So we are planning on running each class with an intervention group and a waiting-list control (which then makes 80 participants, 40 in each facility). We are using two facilities that we expect will have similar demographics, in the hope that facility will not be a confound in the data. That also means we will not have random assignment to intervention type, but there will be random assignment to WLC or intervention within each facility.

My main hypothesis is that the MBSR will have a differential effect on cognitive abilities such as attention and working memory (e.g., it will improve attention more than the active control), but not on more crystallized cognitive abilities such as verbal ability (for which the active control may do better). I hope this information makes my study clearer and that you have suggestions or ideas for me to explore further. The design is not fully in place yet, so if there is a design, given this number of participants, that would provide more power to the study, I'm all ears! Thank you for your interest in helping me, I very much appreciate it!!

-Becca
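Since the main hypothesis is a differential (group x time) effect, one way to get a feel for the power of that interaction test is a small simulation. The sketch below, in Python, is purely illustrative: the 0.5 SD difference in change and the 0.6 pre-post correlation are assumed values, not figures from the study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def interaction_power(n_per_group=20, diff_in_change=0.5, rho=0.6,
                      alpha=0.05, n_sims=5000):
    # Pre/post scores with SD = 1 and an assumed test-retest correlation rho.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    hits = 0
    for _ in range(n_sims):
        # Active control: no mean change; MBSR: post mean raised by the assumed effect.
        ctrl = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_group)
        mbsr = rng.multivariate_normal([0.0, diff_in_change], cov, size=n_per_group)
        # Comparing change scores between groups is the simplest group-by-time test.
        _, p = stats.ttest_ind(mbsr[:, 1] - mbsr[:, 0], ctrl[:, 1] - ctrl[:, 0])
        hits += p < alpha
    return hits / n_sims

print(interaction_power())   # roughly .4 under these assumed values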
You have a repeated measures design. This design automatically controls for individual differences. Which procedure are you using?

Dr. Paul R. Swank, Professor
Children's Learning Institute
University of Texas Health Science Center-Houston
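To put a rough number on the gain Paul is pointing to: when each person serves as their own control, the paired effect size grows with the correlation between the repeated measures. A small illustrative Python sketch (the d of 0.5 and the correlations are assumptions, not values from the study):

from math import sqrt
from statsmodels.stats.power import TTestPower

d_between = 0.5
paired = TTestPower()   # power for a one-sample/paired t-test

for rho in (0.3, 0.5, 0.7):
    # Convert a between-person d to a within-person d_z given the test-retest correlation.
    d_z = d_between / sqrt(2 * (1 - rho))
    power = paired.power(effect_size=d_z, nobs=20, alpha=0.05)
    print(f"rho = {rho}: d_z = {d_z:.2f}, power with 20 pairs = {power:.2f}")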
In reply to this post by rbubb
Becca,
I'd urge you to consider a three-assessment-point measurement structure. People are recruited and randomly assigned to Immediate Treatment (IT) or Delayed Treatment (DT). That is assessment 1, call it a pretest. The IT group receives the intervention; the DT group does not. At the completion of treatment, everybody, both groups, is assessed (assessment 2). The DT group then receives the intervention; the IT group is in followup. At the conclusion of the DT group intervention, everybody, both groups, is assessed (assessment 3).
From DT(A1-A2) you get a measure of stability from practice, from aging, from etc.

From IT(A1-A2) versus DT(A2-A3) you get the intervention effect.

From IT(A2-A3) you get a one-group test of intervention persistence.
There are certainly ways that this sort of thing can blow up, and it's important to think of them and do something about them. One concern is measure or assessment practice or familiarity: scores change only because people have seen the measure(s) more than once.
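A minimal sketch of how those three contrasts might be computed, assuming a hypothetical wide-format file with one row per participant, a group column ('IT'/'DT'), and columns a1, a2, a3 for the three assessments (the file and variable names are placeholders, not from the study):

import pandas as pd
from scipy import stats

df = pd.read_csv("assessments.csv")          # hypothetical file
it = df[df["group"] == "IT"]
dt = df[df["group"] == "DT"]

# 1. Stability check (practice/aging): DT change over A1-A2, before any treatment.
stability = stats.ttest_rel(dt["a2"], dt["a1"])

# 2. Intervention effect: IT's treated change (A1-A2) vs. DT's treated change (A2-A3).
effect = stats.ttest_ind(it["a2"] - it["a1"], dt["a3"] - dt["a2"])

# 3. Persistence: IT change over A2-A3, the follow-up period after its treatment ends.
persistence = stats.ttest_rel(it["a3"], it["a2"])

print(stability, effect, persistence)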
In order to control for between-facility differences, I think you have only two choices: matching or covariate adjustment (i.e., statistical control at assessment). I think your group is too small for propensity score matching, but others on the list have more experience with this.
Gene Maguin
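For the covariate option Gene mentions, an ANCOVA-style model is one straightforward sketch: adjust the post-test for baseline and include facility as a factor. The example below uses Python's statsmodels formula interface, and the variable names (posttest, pretest, treated, facility) and coding are hypothetical placeholders:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")           # hypothetical file
# Hypothetical coding: treated = 1 (intervention) vs 0 (waiting list), randomized
# within each facility; facility identifies the site (and hence which class is taught there).

model = smf.ols("posttest ~ pretest + C(treated) * C(facility)", data=df).fit()
print(model.summary())

# C(treated) gives the baseline-adjusted effect of receiving a class vs. waiting list;
# the treated-by-facility interaction asks whether that effect differs between sites.
# Because intervention type and facility coincide in this design, a site difference in
# the treatment effect cannot be separated from a difference between the two interventions.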
In reply to this post by Swank, Paul R
Well, I haven't started it yet (or even proposed it yet), so I wanted feedback on which procedure would give the study the most power. Either would be feasible.
In reply to this post by Maguin, Eugene
Gene,
Thank you for this information. This seems feasible, and it makes sense with what we are planning. Can you perhaps point me toward other studies that use this design, or toward an author who has published as a proponent of it? Thank you!

Becca