Respondent fatigue

Bob Schacht-3
We have a large survey that we are piloting ("large" means "more than 100
questions"). Recognizing the possibility of respondent fatigue, we did not
want to leave the same set of questions at the end of the survey each time,
because that would mean that the last set of questions might be more
affected by respondent fatigue than other sections.

There were six question sets, so we grouped them as follows:

Group A = Question Sets 1 & 2
Group B = Question Sets 3 & 4
Group C = Question Sets 5 & 6

We printed up the questionnaires so that
33% were ordered A, B, C
33% were ordered B, C, A
33% were ordered C, A, B

Responses to all questions were on a 5-point Likert scale.

In other words, every respondent got the same questions; they just got them
in a different order.
Let us assume that the respondents were assigned to one of the three groups
at random.
The respondent pool so far (N = 50) is not large enough to be conclusive,
but it would be nice to include some measure of respondent fatigue.

What would respondent fatigue look like?
    * Less variability in responses in the last question set
    * More incomplete (blank) responses in the last question set
    * [other?]

What is the best way to test this data for respondent fatigue?

Thanks,
Bob
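
One way to exploit the counterbalanced design: because each block is presented
last to roughly a third of respondents, blanks and within-block variability can
be compared for the first- versus last-presented block within each respondent.
Below is a minimal sketch in Python (pandas/scipy) rather than SPSS syntax,
assuming a wide file with one row per respondent, a "form" column holding the
presentation order ("ABC", "BCA", or "CAB"), and hypothetical item names
q1..q102 split into the three blocks.

import pandas as pd
from scipy import stats

df = pd.read_csv("pilot.csv")                  # hypothetical file name

blocks = {                                     # hypothetical item groupings
    "A": [f"q{i}" for i in range(1, 35)],
    "B": [f"q{i}" for i in range(35, 69)],
    "C": [f"q{i}" for i in range(69, 103)],
}

def block_stats(row, block):
    # blanks and standard deviation of one respondent's answers in one block
    vals = pd.to_numeric(row[blocks[block]], errors="coerce")
    return pd.Series({"blanks": vals.isna().sum(), "sd": vals.std()})

first = df.apply(lambda r: block_stats(r, r["form"][0]), axis=1)
last = df.apply(lambda r: block_stats(r, r["form"][2]), axis=1)

paired = pd.concat(
    [first.add_prefix("first_"), last.add_prefix("last_")], axis=1
).dropna()

# More blanks and a lower SD in the last-presented block would be
# consistent with fatigue.
print(stats.ttest_rel(paired["first_blanks"], paired["last_blanks"]))
print(stats.ttest_rel(paired["first_sd"], paired["last_sd"]))

With N = 50 these paired tests are underpowered, so they are best read as
descriptive checks on the pilot rather than conclusive evidence either way.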


Re: Respondent fatigue

Lemon, John S.
Bob et al.

I am also very concerned about respondent fatigue, and the way I tested it was simpler than yours: I split the survey in two and made two versions, each presenting a different half first. This was a web-based survey, and the software allows partial completion, so I knew when people gave up. My analysis method would make a statistician flinch, but what I did was to keep the returns in two separate files, run a simple descriptives analysis, and note where the number of valid responses dropped off, to see whether it happened at a similar point in the two versions of the survey. Interestingly, there were some similarities in the point where people 'gave up', but as it was a simple project and approach I couldn't draw any conclusions from it, apart from the fact that respondent fatigue is real. Added to that, the target group had already had a number of surveys before this one!
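
A rough sketch (Python, hypothetical file and column names) of that drop-off
check: count the valid responses per item in presentation order for each
version, and find where the count first falls below some share of the people
who started.

import pandas as pd

def dropoff_profile(path, share=0.9):
    df = pd.read_csv(path)
    valid = df.notna().sum()          # valid responses per item, in order
    cutoff = share * len(df)
    # first item answered by fewer than `share` of the people who started
    give_up_point = next((col for col, n in valid.items() if n < cutoff), None)
    return valid, give_up_point

for path in ("version_a.csv", "version_b.csv"):   # hypothetical file names
    valid, point = dropoff_profile(path)
    print(path, "- first item below 90% of starters:", point)

Plotting the two "valid" series against item position makes the comparison
between versions easy to eyeball.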

Best Wishes

John S. Lemon
DIT (Directorate of Information Technology) - Student Liaison Officer
University of Aberdeen
Edward Wright Building: Room G51

Tel:  +44 1224 273350
Fax: +44 1224 273372


Re: Respondent fatigue

Brian Moore-7
Hi Bob-

There was actually an article on this topic in the November 2008 issue of
Quirk's Marketing Research Review called "The Survey Killer." It covered some
things that don't seem to apply in your case (such as which question types
most commonly trigger early termination), but here are a few possibilities
from their findings across 550 surveys they had administered (some designed
specifically to get at this, but most apparently not).

In addition to straight-lining and early termination, they mentioned a
reduction in extreme values / a change in the "character" of answers (neutral
answers increase 18% and both extremes decrease 20-25%), and time spent per
question (a decrease is a sign of fatigue as well).

Also notable were a 41% decline in word count in a split test of open-ended
responses placed early versus late in an otherwise similar survey, and fewer
items checked in multi-response lists (not split tested).
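
A small sketch of the neutral/extreme-share and straight-lining checks along
these lines, in Python with hypothetical item names, assuming a fixed question
order for simplicity (in Bob's counterbalanced design, "early" and "late" would
be defined per form, as in the sketch after his original message):

import pandas as pd

df = pd.read_csv("pilot.csv")                     # hypothetical file name
early = [f"q{i}" for i in range(1, 35)]           # hypothetical item names
late = [f"q{i}" for i in range(69, 103)]

def share(items, values):
    # proportion of the given answer values among non-blank responses
    block = df[items]
    return block.isin(values).sum().sum() / block.notna().sum().sum()

print("neutral (3) share, early vs late:", share(early, [3]), share(late, [3]))
print("extreme (1/5) share, early vs late:", share(early, [1, 5]), share(late, [1, 5]))

# possible straight-liners: no variation at all across the late items
print("zero-variance respondents on late items:", (df[late].std(axis=1) == 0).sum())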

The one other thing I can add from other sources is that including some sort
of verification question ("please answer 3 for this question," or asking year
of birth in one place in the survey and age somewhere else) is a worthwhile
practice for identifying low concentration or interest.
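
And a tiny sketch of that kind of consistency check, assuming hypothetical
"age" and "year_born" columns and a 2008 field date:

import pandas as pd

df = pd.read_csv("pilot.csv")                     # hypothetical file name
implied_age = 2008 - df["year_born"]
# flag respondents whose reported age and birth year disagree by more than a year
inconsistent = df[(implied_age - df["age"]).abs() > 1]
print("respondents failing the age check:", len(inconsistent))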

Regards,
Brian

Brian Moore
Market Research Manager
WorldatWork





