Pre/Post Survey Problems


Kara D. Larkan-Skinner

I'm in a field that primarily uses applied research, and I'm in a predicament. My office has been asked to analyze a pre- and post-survey. (The following numbers are close estimates.) The pre-test captured only 40% of the population (N=100), the post-test captured only 20% (N=50), and the matched group that completed both was around 8% (N=22). The survey is organized into sections, or themes, with approximately 50 total questions and roughly 10 questions per theme. There were many survey distribution errors, which led to the response rates above.

 

What is the best method for analyzing these data? Or are they too compromised to support any reasonable conclusions (even with the use of statistical testing)? I am concerned about the extremely high attrition rates. It also concerns me to throw out all of the unmatched results. I understand that this is commonly done in order to attribute a "value added" effect, but doesn't throwing out 100+ survey responses skew the results in and of itself? I also want to be very cautious when interpreting an instrument of questionable validity, because my cautions about reliability and validity often fall on deaf ears while the results spread like wildfire.

 

I'm interested in hearing your thoughts and opinions on this.

 

Thanks

KLS

 


Re: Pre/Post Survey Problems

Rich Ulrich
In reply to this post by Kara D. Larkan-Skinner
You are right to be cautious about the selection/attrition bias.

The first thing to do is to compare "Matched" versus "Unmatched"
for each time period.  This estimates a lower limit on the size of
the selection bias.  You can't say how different your sample is from
the people who were *never* sampled, but you can see if there is
a difference between sampled-once and sampled-twice.  If there is
a definite difference, then you have strong reason to warn against
extrapolating to the never-sampled.
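
A minimal sketch of that first check in SPSS syntax, assuming a
wide-format pre-test file with hypothetical variable names
(pre_score for everyone; post_score missing for anyone who skipped
the post-test):

* Flag pre-test respondents who also completed the post-test.
* MISSING(post_score) is 1 when the post score is absent, else 0.
COMPUTE matched = 1 - MISSING(post_score).
EXECUTE.

* Compare matched (1) versus unmatched (0) on the pre-test score.
T-TEST GROUPS=matched(0 1)
  /VARIABLES=pre_score.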

The information that is paired, Pre-Post, is going to be a good
estimate of change *if* the correlation between pre and post is
very high (and positive). However, you also have a separate estimate
of change in the non-matched data. If there was *no* attrition/selection
bias, then you have two estimates that it might be fair to combine:
the paired t-test and a pooled t-test on the rest. If there was
definite attrition bias, you should probably state the conclusions
separately, at least as the first step.
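
In SPSS terms, and again with hypothetical variable names, the two
estimates might be obtained like this (the PAIRS output also reports
the pre-post correlation mentioned above):

* Paired estimate of change on the N=22 matched cases.
T-TEST PAIRS=pre_score WITH post_score (PAIRED).

* Pooled estimate from the non-matched cases only.
* Assumes a stacked file with one row per response.
* Variables: score, timepoint (1=pre, 2=post), matched as above.
TEMPORARY.
SELECT IF (matched = 0).
T-TEST GROUPS=timepoint(1 2)
  /VARIABLES=score.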

There is a fancy way to incorporate all the scores into one linear
model, which I would consider suitable only as part of a final, pedantic
summary statement, after finding out that there is nothing tricky
or unexpected in the way the simple tests come out.
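
For completeness, one hedged sketch of such a model in SPSS: a linear
mixed model fitted to the stacked file, which uses all of the
responses at once and lets the single-timepoint cases contribute
under a missing-at-random assumption (id and the other variable
names are again hypothetical):

* All scores in one linear model.
* Fixed effect of time; repeated measures within subject.
MIXED score BY time
  /FIXED=time
  /METHOD=REML
  /PRINT=SOLUTION
  /REPEATED=time | SUBJECT(id) COVTYPE(UN).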

--
Rich Ulrich



Re: Pre/Post Survey Problems

Melissa Ives
In reply to this post by Kara D. Larkan-Skinner

Check the N=100 intake group against the N=22 matched group: are they different on basic characteristics?
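
A hedged sketch of that comparison in SPSS syntax, reusing the
matched flag from Rich's step; the demographic variable names are
hypothetical:

* Categorical intake characteristics, matched (1) vs. not (0).
CROSSTABS
  /TABLES=matched BY gender ethnicity
  /STATISTICS=CHISQ.

* Continuous intake characteristics.
T-TEST GROUPS=matched(0 1)
  /VARIABLES=age pre_score.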

 
