Fw: Re:



John F Hall
Dan, Eins,
 
I have similar reservations about 0 - 10 scales (in Eins' case 0 - 9 or 1 - 10?).  Ornauer and Galtung used 1 - 9.  The European Social Survey uses 0-20!!
 
Did you see my response (see original message below) on the thread: 

sytax for no variance in survey item responding

(not my spelling!) to a similar question about scale length?  You can see links to the questionnaires and showcards used via http://surveyresearch.weebly.com/ssrc-survey-unit-quality-of-life-in-britain-surveys.html
 
Nick Moon of GfK thinks 4 points are enough (with no mid-point for ditherers).  When I was at SSRC in the early 1970s, Donald Monk (then at RSL in London) advised that scales with no evaluative content should be horizontal, but evaluative scales (eg "Good - Bad") should be vertical.  I vaguely remember toying with "asymmetric" scales in which the neutral point would be, say, at point 3 on a 1 - 7 scale, but we never actually used one.  The format of our scales was changed from ladders to beads on the advice of Bill Belson (LSE): he suggested that respondents' answers tend to wander when the boundaries between scale points are unclear.
 
My feeling is that anything over 7 points is too long for respondents to use, and there is also a problem of respondent fatigue (ie variance reduces in later responses).  On evaluative scales I think awarding stars or ticking faces ranging from grumpy to smiley would work better than ladders, but I've never used them myself.
 
Finally, if the scale used has no zero point, total scores need to be adjusted by subtracting the number of items to give a true zero point; otherwise end readers tend to misinterpret the scores.
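 
In SPSS terms something like this would do it (a sketch only: the item names sat1 TO sat10 and the variable total_adj are mine, assuming ten items each scored 1 - 7):
 
* Sketch only: assumes ten items sat1 TO sat10, each scored 1 (not 0) to 7.
COMPUTE total = SUM(sat1 TO sat10).
* Subtract the number of items so that the lowest possible total is a true 0.
COMPUTE total_adj = total - 10.
EXECUTE.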
 
John Hall
 
 
----- Original Message -----
Sent: Friday, January 08, 2010 11:49 AM
Subject: Re: Re: sytax for no variance in survey item responding

35 years ago I faced a similar problem using an 11-point 0 (completely dissatisfied) to 10 (completely satisfied) scale on a hundred or so domains and sub-domains in the SSRC Survey Unit Quality of Life in Britain surveys, which I did with the late Dr Mark Abrams (SSRC Survey Unit) and in collaboration with the late Prof Angus Campbell (ISR, Ann Arbor).  We had done one pilot survey using 0-10 and another using a 1-7 scale (as per the USA survey), in which responses were highly (negatively) skewed, so we switched back to 0-10 to get the ratings to spread out more, but this created a different problem.  The fullest account of these surveys is "Subjective measures of quality of life in Britain: some developments and trends" (Hall, Social Trends No 7, HMSO, 1976).
 
I am not, and never was, a statistician, but it was clear to me from inspection of the raw data that some respondents were not using the full range of points, eg some were using 0, 5 and 10, others 2 thru 8, others 5 thru 10, and so on.  I used exactly Jim Marks' syntax to create counts for all the points and then attempted to ipsatize the scores by calculating standardised scores for each individual based on their use of the scale.  The syntax for this is long lost, but the late Frank Andrews (ISR, Ann Arbor) was quite impressed, and Dr Aubrey McKennell claimed that the ipsatized scores behaved in more or less the same way as the original ratings (as did the satisfaction ratings when weighted by the respondents' own importance ratings).
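 
Something along these lines would reproduce the idea (a reconstruction, not the lost syntax; the item names var1 TO var20 and the new variables pmean, psd and z1 TO z20 are mine):
 
* Reconstruction of the idea only, not the original (lost) syntax.
* Assumes 20 ratings var1 TO var20; z1 TO z20 will hold the ipsatized scores.
COMPUTE pmean = MEAN(var1 TO var20).
COMPUTE psd = SD(var1 TO var20).
DO REPEAT v = var1 TO var20 / z = z1 TO z20.
  COMPUTE z = (v - pmean) / psd.
END REPEAT.
* Respondents who used only one point have psd = 0 and get system-missing z scores.
EXECUTE.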
 
One thing that variance won't pick up is the pattern of usage by each respondent.  I went one stage further and did a count on the counts to see how many different points were used: it was a long time ago, but it must have been something like:
 
count zeros = <varlist1, varlist2 etc> (0).
count eights = ...
count nines = ...
count tens = ...
count points = zeros to tens (1 thru hi).
 
... or, in Jim's 1 - 7 example below:
 
count points = ones to sevens (1 thru 20).
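 
Written out in full for our original 0 - 10 items it would have looked something like this (a guess at the lost syntax; the item names var1 TO var20 are borrowed from Jim's example below):
 
* A guess at the lost syntax, assuming 20 items var1 TO var20 scored 0 - 10.
COUNT zeros  = var1 TO var20 (0)
     /ones   = var1 TO var20 (1)
     /twos   = var1 TO var20 (2)
     /threes = var1 TO var20 (3)
     /fours  = var1 TO var20 (4)
     /fives  = var1 TO var20 (5)
     /sixes  = var1 TO var20 (6)
     /sevens = var1 TO var20 (7)
     /eights = var1 TO var20 (8)
     /nines  = var1 TO var20 (9)
     /tens   = var1 TO var20 (10).
* How many of the 11 possible points did each respondent actually use?.
COUNT points = zeros TO tens (1 THRU 20).
EXECUTE.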
 
Another finding was that respondents using point 2 on a predictor subdomain were often scoring lower on the related main domain than respondents using 0, so Jim Ring wrote a special program to force MCA (Multiple Classification Analysis) to retain the original order when modelling our findings.  There's an account of the latter in Appendix D (in the figures and appendices) of our paper "Indicators of Environmental Quality and Life-Satisfaction: a subjective approach" (Hall & Ring, International Sociological Association, Toronto, 1974).
 
----- Original Message -----
Sent: Thursday, January 07, 2010 11:34 PM
Subject: Re: sytax for no variance in survey item responding

Taylor:

 

The easiest way would be to use the COUNT function.

 

Not tested.

 

** Find out how many answers in var1 to var20 have a value of 1 thru 7.

COUNT sevens = var1 TO var20 (7)
   /sixes  = var1 TO var20 (6)
   /fives  = var1 TO var20 (5)
   /fours  = var1 TO var20 (4)
   /threes = var1 TO var20 (3)
   /twos   = var1 TO var20 (2)
   /ones   = var1 TO var20 (1).

 

** mark a case as suspicious if 15 or more answers are the same.

COMPUTE suspicious = (MAX(sevens TO ones) GE 15).

 

EXECUTE.

 

Note that this syntax uses the TO convention to specify all the variables between var1 and var20. You might need to list out the variables to be checked if they are not contiguous in the dataset.
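 
For instance, with scattered items each count would need an explicit list (these variable names are purely illustrative):
 
* Illustrative only: replace q2, q5, q9, q14, q20 with the actual item names.
COUNT sevens = q2 q5 q9 q14 q20 (7)
   /ones = q2 q5 q9 q14 q20 (1).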

 

Jim Marks

Director, Market Research

x1616

 

From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Poling, Taylor L
Sent: Thursday, January 07, 2010 3:07 PM
To: [hidden email]
Subject: sytax for no variance in survey item responding

 

 


Hi List,

 

Can anyone suggest some syntax for flagging cases where responses are all the same across a set of many variables (i.e., survey items), without having to write a giant, complex IF statement?

 

Thanks,

Taylor