
Re: Normalizing scores

Posted by Ryan on Apr 29, 2013; 10:15am
URL: http://spssx-discussion.165.s1.nabble.com/do-repeat-tp5719707p5719808.html

Gary,

I do not recall you stating that a mini study should be conducted, but I suppose you did. Regardless, that is consistent with the crux of the problem. We are more or less on the same page. In order for one to consider employing your suggested regression approach, one would require that the same respondents be measured twice. (For the record, I would consider other psychometric methods). Relatedly, as John keenly pointed out, one questionnaire is likely measuring something different than the other. There are ways to evaluate this (which I will not expound upon at this point), but again, the current data do not allow for it, at least not in a way that I would be comfortable pursuing.

Best,

Ryan

On Apr 29, 2013, at 6:00 AM, Garry Gelade <[hidden email]> wrote:

Mike/Ryan

 

No, I didn't assume the same respondents were measured twice! There would be no problem if that were the case.

What I suggested was a new 'mini-survey' (or pair of surveys) in which a representative sample of respondents ARE measured twice, in order to establish the regression relationships between corresponding pairs of items in Wave 1 and Wave 2.

 

Then you apply those relationships to the previous Wave 2 survey data to predict what the Wave 1 score would have been (or vice versa).
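A minimal sketch of this calibration step in Python, assuming hypothetical paired mini-survey data for a single item (every number and variable name below is invented for illustration, not taken from the actual surveys):

```python
# Sketch of the calibration idea: in a mini-survey, the SAME respondents
# answer both the Wave 2 (agreement) and Wave 1 (satisfaction) versions
# of an item. All data here are hypothetical.
paired = [  # (wave2_agreement, wave1_satisfaction) per respondent
    (1, 1), (2, 2), (2, 3), (3, 3), (3, 4), (4, 4), (4, 5), (5, 5),
]

# Fit a simple linear regression predicting the Wave 1 score from the
# Wave 2 score (ordinary least squares, closed form for one predictor).
n = len(paired)
mean_x = sum(x for x, _ in paired) / n
mean_y = sum(y for _, y in paired) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in paired) / \
        sum((x - mean_x) ** 2 for x, _ in paired)
intercept = mean_y - slope * mean_x

# Apply the fitted relationship to the main Wave 2 survey responses
# to predict what each Wave 1-scale score would have been.
wave2_scores = [2, 3, 4, 5, 3]
predicted_wave1 = [intercept + slope * x for x in wave2_scores]
```

With real data one would fit a separate regression for each corresponding item pair (and check the fit); this sketch covers a single item only.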

 

Garry

 

 

 

From: SPSSX(r) Discussion [[hidden email]] On Behalf Of MR
Sent: 28 April 2013 20:31
To: [hidden email]
Subject: Re: Normalizing scores

 

 

Ryan,

 

Yes, this is exactly how it was asked. In Wave 1 we asked, "Please indicate your level of satisfaction with cleanliness" (on a 5-point scale, 1 being extremely dissatisfied and 5 being extremely satisfied). In Wave 2 we asked, "I am satisfied with the cleanliness" (on a 5-point scale, 1 being strongly disagree and 5 being strongly agree).

 

Mike

 

 

On 2013-04-28, at 11:53 AM, Ryan Black <[hidden email]> wrote:



It is still unclear what you mean by an "agreement scale" versus a satisfaction scale.

 

For example, in one questionnaire, did you have:

 

"I am satisfied with the cleanliness of the dining area" with response options ranging from strongly disagree-->strongly agree

 

which was changed to

 

"Please indicate the degree to which you were satisfied with the following aspects of the dining area:

1. cleanliness (very dissatisfied-->very satisfied)

.

.

 

You must provide as many details as possible, in a succinct way, when describing your design in your original post to SPSS-L, or list members will either (a) ignore your post or (b) make assumptions that may or may not be true.

 

My guess is that Garry assumed that this was the same group of respondents measured twice, and therefore suggested an idea to see if there was a scaling factor across composite scores, but this simply does not apply to your scenario.

 

You have different questionnaires administered to different groups of people at different points in time.

 

My advice: Report the results separately. Do not conclude that the use of one questionnaire resulted in higher scores than use of another questionnaire. Accept the limitations of the study: You are asking different questions to different people at different points in time and therefore will likely obtain different responses. How are we to know/assume/evaluate that the questionnaires are measuring the same construct, given the study design? In the future, make sure that this flawed approach does not repeat itself.

 

Ryan

On Sun, Apr 28, 2013 at 11:03 AM, MR <[hidden email]> wrote:

Art,

 

Thanks for your response:

 

1. Respondents are different in both waves

2. We asked about satisfaction with food, staff, and speed of service. We asked the same measures in Wave 2, but on an agreement scale.

3. This is non-profit work for a community hospital restaurant. Unfortunately, the decision maker had his own hypothesis about scales. We debated a lot, but he still went ahead with the scale change.

4. Yes, the scale magnitudes were the same.

 

Re: regression, I really can't get my head around the regression part. Is t1 the Wave 1 score for food and t3 the Wave 2 score for food? What's the dependent variable here? Note that I cannot run repeated measures, as the respondents are not the same.

 

Thanks,

 

On 2013-04-28, at 8:46 AM, Art Kendall <[hidden email]> wrote:



Do you have the same respondents in both waves? Can you tie responses to individuals?

What did they agree with?

Did you have a series of items with the same response scale to create a summative score, or do you have a single item?

You could do a regression as Garry suggested.
On later waves you could ask for both measures of performance. You would then have
t1 vs t3 for satisfaction and
t2 vs t3 for agreement.

However, I do not think you can conclude at this time that performance dropped.   You can conclude that the way that you measured performance changed.

Who changed the response format?  Were the stems identical, similar?

Art Kendall
Social Research Consultants

On 4/27/2013 8:24 PM, MR [via SPSSX Discussion] wrote:

Team,

I have a problem on my hands and am running out of options for which statistics to use in SPSS. First, I know that what I want to do is not advisable, but trust me, I have fought my battle on this. This is what I want to achieve:

Issue: We did the Wave 1 survey using a 5-point satisfaction scale. The second wave was conducted using a 5-point agreement scale. As expected, the top-box score from the agreement scale was 10 percentage points lower than the top-box score from the satisfaction scale. For example, the agreement-scale top box in Wave 2 came out as 50%, while in Wave 1 it was 60%.
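For readers unfamiliar with the term: a top-box score is simply the percentage of respondents choosing the highest category. A minimal Python sketch with invented responses that happen to reproduce the 60% and 50% figures:

```python
# Top-box score: percentage of respondents choosing the top category.
# The response lists below are hypothetical, not the actual survey data.
def top_box(scores, top=5):
    """Return the percentage of scores equal to the top category."""
    return 100.0 * sum(1 for s in scores if s == top) / len(scores)

wave1 = [5, 5, 5, 5, 5, 5, 4, 4, 3, 2]   # six of ten in the top box
wave2 = [5, 5, 5, 5, 5, 4, 4, 3, 3, 2]   # five of ten in the top box
```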

Goal: I have compared the historical data and conclude that the score difference is purely due to the scale change. However, I want to normalize the Wave 2 scores so that I can compare them with Wave 1. I know this is not advisable, but I have to do it. I googled but could not find any statistic that helps normalize the scores; indeed, I don't know where to begin. I need a scientific method to normalize the scores so that they are comparable. I don't want to conclude that performance dropped by 10 points just because the scale changed.
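One crude transformation sometimes suggested in this situation is within-wave z-score standardization. A minimal Python sketch with invented data follows; note that this only removes each wave's mean and spread, and cannot establish that the two questionnaires measure the same construct, which is the core objection raised in this thread:

```python
# Within-wave z-score standardization (hypothetical data).
# This removes wave-specific location and spread; it does NOT make
# the two questionnaires measure the same construct.
def zscores(scores):
    """Standardize scores to mean 0, SD 1 (population SD)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    return [(s - mean) / sd for s in scores]

wave1 = [3, 4, 4, 5, 5, 4]   # satisfaction scale (hypothetical)
wave2 = [2, 3, 3, 4, 4, 3]   # agreement scale (hypothetical)

z1 = zscores(wave1)
z2 = zscores(wave2)
```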

Your wisdom and help is very much required.

Thanks,
Mike




 


 

