Hi,

It is often suggested that survey questionnaires should be designed with multiple items to assess a specified domain. This is based on the idea that there is a latent variable (e.g. attitude to immigration) that you measure by 'sampling' the attitude in question via multiple items. The resulting measure (often a summated scale value) has some (estimable) error, which means it is an imperfect measure of the latent variable.

On the other hand, we frequently see surveys (e.g. political opinion polls) that measure something via a single question (e.g. a question on whether you think Sarah Palin would be a good Vice-President). Further, it is not unusual to see the results of such single-item surveys reported with confidence intervals to indicate the degree of error, implying a degree of legitimacy linked to the measure.

Is there a way to resolve this apparent paradox?

Regards,

Clive.

=====================
To manage your subscription to SPSSX-L, send a message to [hidden email] (not to SPSSX-L), with no body text except the command. To leave the list, send the command SIGNOFF SPSSX-L. For a list of commands to manage subscriptions, send the command INFO REFCARD.
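(A note on what those poll confidence intervals actually quantify: they describe sampling error — the uncertainty from asking only a subset of the population — not measurement error in the question itself. A minimal sketch of one standard interval for a poll proportion, the Wilson score interval, with entirely made-up poll numbers:)

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default).

    Captures only sampling error: how far the sample proportion may sit
    from the population proportion for this exact question.
    """
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical poll: 460 of 1000 respondents answer "yes"
lo, hi = wilson_ci(460, 1000)
print(f"95% CI: {lo:.3f} to {hi:.3f}")
```

(Note that nothing in this calculation says anything about whether the single question is a reliable or valid measure of the underlying attitude — which is the crux of the apparent paradox.)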
I'll take a crack at this one:

I'm not sure there is a paradox so much as different goals and different risks of misinterpretation. You can ask people for their global feelings about someone or something, and you will get a response that reflects an off-the-cuff "gut" feeling. Recognizing that there are limits to what the answer to such a single question can tell you means that you can still use it to make some limited inferences. You need to recognize that there is limited information in the responses, and that the reliability of responses to a single question is going to be lower than what a multi-item measure would give you.

But sometimes you don't need a high level of precision. Elections are a place where first/global impressions seem to count more than careful reasoning, so measures of global impressions probably work reasonably well (unless you are intentionally wording the question to bias the answer).

However, the question "Do you like Palin?" is fairly useless for telling us which issues are driving voters, or even why voters do or do not like Palin. To understand why voters feel as they do about Palin, we need several questions. The talking head who declares that McCain is doing poorly in the polls because of Palin, or that voters don't like Palin because she has five kids, based on that single gut-feeling question, is grossly exaggerating the knowledge they can get from it.

Michael

****************************************************
Michael Granaas                  [hidden email]
Assoc. Prof.                     Phone: 605 677 5295
Dept. of Psychology              FAX: 605 677 3195
University of South Dakota
414 E. Clark St.
Vermillion, SD 57069
*****************************************************
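(The reliability point above — that a single item is less reliable than a multi-item scale — is the classical-test-theory result behind the multi-item advice. The Spearman-Brown prophecy formula predicts how reliability grows as parallel items are added; a sketch, assuming a purely illustrative single-item reliability of 0.40:)

```python
def spearman_brown(r_single, k):
    """Predicted reliability of a k-item scale built from parallel items,
    each with reliability r_single (Spearman-Brown prophecy formula)."""
    return k * r_single / (1 + (k - 1) * r_single)

# Hypothetical: a single item with reliability 0.40
for k in (1, 4, 8):
    print(k, round(spearman_brown(0.40, k), 2))
```

(With these assumed numbers, reliability rises from 0.40 for one item to roughly 0.73 for four and 0.84 for eight — which is why summated scales are preferred when you need precision about the latent attitude.)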
In reply to this post by Clive Downs
Hi - Please understand that this response is from a list lurker, and not one of the more authoritative voices.

Generally, I assume that the point question has been "discovered" through some amount of exploratory data analysis first. This could include CART, CHAID, correspondence or latent class analysis, or factor or cluster techniques. Either I'm naive enough to believe that's the case, or it's what I hope is happening behind the scenes; either way, it's why I always want to see the full question (as scripted) and the full response set whenever I see a confidence interval for a point question. Sampling design and other descriptive data are equally important in that sense. I usually worry less about the validity of the point question when it is part of a longitudinal study, that is, one tracking changes or movements over time.

True "discovery", in my opinion, happens at the front end of research - before the researcher "knows" which questions to ask or even how to ask them. The trouble with the outputs of this stage is that the responses are not ready for confirmatory analysis. But it sure beats being precisely wrong about one thing with a high degree of confidence or statistical validity. My clients constantly challenge me over what is intuitively obvious, what they already know to be "true", and what they have no clue about. It makes every project an adventure, but hardly a paradox.

Regards,

Leake Little
InfoMotion LLC
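(On the "estimable error" of a summated scale mentioned in the original question: the usual internal-consistency estimate is Cronbach's alpha. A minimal sketch, using invented Likert-type responses rather than any real survey data:)

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summated scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses: 6 respondents, 4 Likert items tapping one attitude
scores = [[4, 5, 4, 5],
          [2, 1, 2, 2],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [1, 2, 1, 2],
          [3, 4, 3, 3]]
print(round(cronbach_alpha(scores), 2))
```

(Because these invented items correlate strongly, alpha comes out high; a single item offers no such internal check, which is exactly the asymmetry the thread is circling around.)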
