do repeat


Re: Normalizing scores

Art Kendall
Do you have the same respondents in both waves? Can you tie responses to individuals?

What did they agree with?

Did you have a series of items with the same response scale to create a summative score, or do you have a single item?

You could do a regression as Garry suggested.
On later waves you could ask for both measures of performance. You would then have
t1 vs t3 for satisfaction and
t2 vs t3 for agreement.

However, I do not think you can conclude at this time that performance dropped. You can conclude that the way you measured performance changed.

Who changed the response format? Were the stems identical, or merely similar?
Art Kendall
Social Research Consultants
On 4/27/2013 8:24 PM, MR [via SPSSX Discussion] wrote:
Team,

I have a problem on my hands and am running out of options for which statistics to use in SPSS. First, I know that what I want to do is not advisable, but trust me, I have fought my battle on this. This is what I want to achieve:

Issue: We ran the wave 1 survey using a 5-point satisfaction scale. The second wave was conducted using a 5-point agreement scale. As expected, the top-box score from the agreement scale came out 10 percentage points lower than the top-box score from the satisfaction scale. For example, the agreement-scale top box in wave 2 was 50%, while in wave 1 it was 60%.

Goal: I have compared the historical data and conclude that the score difference is purely due to the scale change. However, I want to normalize the wave 2 score so that I can compare it with wave 1. I know this is not advisable, but I have to do it. I googled but could not find any statistic that helps to normalize the scores; indeed, I don't know where to begin. I need a scientific method to normalize the scores so that they are comparable. I don't want to conclude that performance dropped by 10 points just because the scale changed.

Your wisdom and help are very much needed.

Thanks,
Mike

Re: Normalizing scores

Ryan
In reply to this post by MR
Are you dealing with a study design in which different questionnaires (agreement versus satisfaction) were administered to different groups of people at different points in time, with the purpose of comparing the composite scores after data collection? If so, why would you think the responses are comparable in any way, shape, or form? Why would you want to compare them anyway? What's the point?

Further, do you believe people in the first wave scored higher on one questionnaire than people in the second wave because the item(s) and/or response options were framed in terms of satisfaction rather than agreement?

Please answer my questions, providing as much clarification as possible. Lay out the study design. Also, please give examples of the items and response options from each questionnaire.

Ryan

Re: Normalizing scores

MR
In reply to this post by Garry Gelade
Garry,

Can you explain more about applying regression results to rescore? I have not come across such a technique and would appreciate it if you could shed some light on it.

Once I am done with this issue at hand, I am going to conduct a separate study to measure the impact, and I will be more than happy to share the results with you all.


On 2013-04-28, at 7:30 AM, "Garry Gelade" <[hidden email]> wrote:

> Mike,
>
> The only thing I can think of is to run the survey on a subset of
> individuals (preferably a stratified random sample) using both forms of the
> scale. Then regress one score on the other. You can then apply the
> regression results to rescore your previous survey into the alternative
> scale form.
>
> Garry Gelade
Re: Normalizing scores

MR
In reply to this post by Art Kendall
Art,

Thanks for your response:

1. Respondents are different in the two waves.
2. We asked about satisfaction with food, staff, and speed of service. We asked about the same measures in wave 2, but on an agreement scale.
3. This is non-profit work for a community hospital restaurant. Unfortunately, the decision maker had his own hypothesis about scales. We debated a lot, but he still went ahead with the scale change.
4. Yes, the scale magnitudes were the same.

Re: regression, I am really not getting my head around the regression part. Is t1 the wave 1 score for food, and t3 the wave 2 score for food? What's the dependent variable here? Note that I cannot run repeated measures, as these are not the same respondents.

Thanks,
Re: Normalizing scores

Art Kendall
How many cases did you end up with at each time? How many cases did you try to recruit at each time?

I am still not clear whether the stimuli (question stems) were the same. Did you change the wording of the questions as well as the response scale? Please provide a couple of examples of the questions at the two times.

Say you did a third wave at time 3 (t3), at which you measured both satisfaction and agreement. Call the first wave time 1 (t1) and the second wave time 2 (t2). You could then compare (accepting some added uncertainty, because the total package of stimuli would differ):
satisfaction at t1 vs t3, and
agreement at t2 vs t3.

You could then correlate agreement and satisfaction at t3. A good scatterplot and crosstab would show you what the relation between satisfaction and agreement is.
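As a rough illustration, a minimal SPSS sketch of that t3 look might run as follows; the variable names sat_clean and agr_clean are hypothetical stand-ins for a satisfaction item and its agreement counterpart:

* Hedged sketch: relate the two versions of one item at t3.
CORRELATIONS /VARIABLES=sat_clean agr_clean.
GRAPH /SCATTERPLOT(BIVAR)=agr_clean WITH sat_clean.
CROSSTABS /TABLES=sat_clean BY agr_clean /CELLS=COUNT ROW.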

If I read Garry's post correctly, the approach would be:

At t1, predict satisfaction from other variables (maybe age, gender, whether the respondent was an employee or was visiting a patient, which meal, etc.) and develop an equation. Apply that equation to the data from t2, using the t2 values of the IVs. Look at the fit, visualize the residuals, etc.

At t2, predict agreement from the same kinds of variables and develop an equation. Apply that equation to the data from t1, using the t1 values of the IVs. Again, look at the fit, visualize the residuals, etc.

Take a large dose of salt and make some guess about how comparable you think the two measures of performance are.
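A minimal sketch of the first half of that procedure; the variable names (sat_clean, age, gender) are hypothetical, the coefficients in the COMPUTE are illustrative placeholders rather than values from any real output, and the t2 half is symmetric:

* At t1: develop the prediction equation and inspect fit and residuals.
REGRESSION
  /DEPENDENT sat_clean
  /METHOD=ENTER age gender
  /SAVE PRED RESID.

* On the t2 file: apply the t1 equation by hand, substituting the
* constant and slopes actually reported in the t1 output.
COMPUTE pred_sat = 2.10 + 0.01*age + 0.15*gender.
EXECUTE.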

Aside: I have some difficulty with using satisfaction and agreement as measures of "performance," especially if they are the whole measure.


Art Kendall
Social Research Consultants
Re: Normalizing scores

Ryan
In reply to this post by MR

It is still unclear what you mean by an "agreement scale" versus a satisfaction scale.

For example, in one questionnaire, did you have:

"I am satisfied with the cleanliness of the dining area," with response options ranging from strongly disagree to strongly agree,

which was changed to

"Please indicate the degree to which you were satisfied with the following aspects of the dining area:
1. cleanliness (very dissatisfied to very satisfied)
..."

You must provide as many details as possible, succinctly, when describing your design in your original post to SPSSX-L, or list members will either (a) ignore your post or (b) make assumptions that may or may not be true.

My guess is that Garry assumed this was the same group of respondents measured twice, and therefore suggested an idea for seeing whether there was a scaling factor across composite scores; but that simply does not apply to your scenario.

You have different questionnaires administered to different groups of people at different points in time.

My advice: report the results separately. Do not conclude that the use of one questionnaire resulted in higher scores than the use of another. Accept the limitations of the study: you are asking different questions of different people at different points in time and therefore will likely obtain different responses. How are we to know, assume, or evaluate that the questionnaires are measuring the same construct, given the study design? In the future, make sure this flawed approach does not repeat itself.

 

Ryan

Re: Normalizing scores

MR

Ryan,

Yes, this is exactly how it was asked. In wave 1 we asked: please indicate your level of satisfaction with cleanliness (on a 5-point scale, 1 being extremely dissatisfied and 5 being extremely satisfied). In wave 2 we asked: I am satisfied with the cleanliness (on a 5-point scale, 1 being strongly disagree and 5 being strongly agree).

Mike


Re: Normalizing scores

MR
In reply to this post by Art Kendall
Art,

We have more than 5,000 cases in each wave. We only have wave 1 (satisfaction scale) and wave 2 (agreement scale). Below is how the questions were asked:

Wave 1: Please select your level of satisfaction with the following attributes. Please respond using a 1-5 scale, where 1 means you are extremely dissatisfied and 5 means you are extremely satisfied.
Attribute: Cleanliness of restaurant

Wave 2: Please select your level of agreement with the following attributes. Please respond using the 1-5 scale below, where 1 means you strongly disagree and 5 means you strongly agree.
Attribute: Restaurant was clean

So if I understand correctly, these are the steps:

1. Use the wave 1 cleanliness score as the DV and age, gender, income, etc. as IVs, and build an equation. (What if the R2 comes out low and there is large unexplained variation?)
2. Use the equation from step 1 to predict the wave 2 cleanliness score.
3. Repeat the above, but use the wave 2 score to build the equation.
4. However, after looking at the fit, how would I normalize the wave 1 score?

Essentially, I want a number by which I can adjust the wave 1 or wave 2 score: for example, reduce the wave 1 top-box score by 5 points to make it comparable to wave 2. How would I get this number?
Mike
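One hedged way to extract such a number from the cross-prediction, assuming a predicted wave-1-style score has already been computed on the wave 2 file along the lines sketched above (clean_w2 and pred_clean are hypothetical names for the observed wave 2 response and the prediction):

* Compare the observed wave 2 top box with the predicted-equivalent top box.
* The 4.5 cutoff for the continuous prediction is an arbitrary choice.
COMPUTE topbox_obs = (clean_w2 = 5).
COMPUTE topbox_pred = (pred_clean >= 4.5).
DESCRIPTIVES VARIABLES=topbox_obs topbox_pred /STATISTICS=MEAN.
* The difference between the two means is one candidate for the
* adjustment number, subject to all the caveats raised in this thread.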

Re: Normalizing scores

John F Hall

Mike,

You are also confusing a "semi-factual" judgment with a "value" judgment: next time, ask both.

How were the samples generated? If they were self-selected at the till, the response rate is probably dismal and the results worthless anyway, except as a PR/marketing ploy. If you only use one question in future, I'd stick to agree-disagree or even a simple yes-no. I worked with satisfaction questions for many years and always felt they were a constant rather than a variable. Things had to be really bad to get significant dissatisfaction ratings.

Do you ask whether clients would come to this restaurant again, or whether they would recommend it to their friends? Simple yes-no, and index = %yes - %no (range -100 to +100).
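A minimal SPSS sketch of that index, assuming a hypothetical variable recommend coded 1 = yes, 0 = no; scoring yes as +100 and no as -100 makes the mean equal %yes - %no:

* Yes-no index: mean runs from -100 (all no) to +100 (all yes).
COMPUTE rec_index = 200*recommend - 100.
DESCRIPTIVES VARIABLES=rec_index /STATISTICS=MEAN.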

 

 

John F Hall (Mr)
[Retired academic survey researcher]
Email: [hidden email]
Website: www.surveyresearch.weebly.com
Start page: www.surveyresearch.weebly.com/spss-without-tears.html

Re: Normalizing scores

Garry Gelade
In reply to this post by MR

Mike/Ryan,

No, I didn't assume the same respondents were measured twice! There would be no problem if that were the case.

What I suggested was a new 'mini-survey' (or pair of surveys) in which a representative sample of respondents ARE measured twice, in order to establish the regression relationships between corresponding pairs of items in wave 1 and wave 2.

Then you apply those relationships to the previous wave 2 survey data to predict what the wave 1 score would have been (or vice versa).

Garry
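A minimal SPSS sketch of Garry's suggestion, run on the hypothetical mini-survey file in which each respondent answered both versions of an item (sat_clean and agr_clean are assumed names):

* Step 1: in the mini-survey file, regress the satisfaction version of
* an item on its agreement counterpart and note the coefficients.
REGRESSION
  /DEPENDENT sat_clean
  /METHOD=ENTER agr_clean.

* Step 2: in the wave 2 file, express each agreement response on the
* satisfaction metric. The constant and slope below are illustrative
* placeholders; substitute the values from the Step 1 output.
COMPUTE sat_equiv = 0.41 + 0.92*agr_clean.
EXECUTE.

This would be repeated per item pair, with the usual regression diagnostics along the way.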

Re: Normalizing scores

Ryan
In reply to this post by Garry Gelade
Garry,

I do not recall you stating that a mini-study should be conducted, but I suppose you did. Regardless, that is consistent with the crux of the problem; we are more or less on the same page. In order to consider employing your suggested regression approach, one would require that the same respondents be measured twice. (For the record, I would also consider other psychometric methods.) Relatedly, as John keenly pointed out, one questionnaire is likely measuring something different from the other. There are ways to evaluate this (which I will not expound upon at this point), but again, the current data do not allow for it, at least not in a way that I would be comfortable pursuing.

Best,

Ryan

Re: Normalizing scores

Art Kendall
In reply to this post by MR
If this is a very crucial point, doing a further study is the only way I can think of to get a handle on an adjustment that would make the scores equivalent. That would be a very slippery handle. Even if you were to do a third study in which you measured both satisfaction and agreement, some additional uncertainty would be introduced by the change in the overall data-gathering instrument. Nonresponse rates are notoriously related to the number of questions asked.

There is an old saying: "You cannot polish pig iron." (Pig iron is the rough block that is cast from a smelter before any work is done on the iron.)

I strongly suggest that you file this experience under "lessons learned" and say "Due to technical problems we cannot tell whether performance has changed."

I think of statistics as an aid to rhetoric (in the old sense of the word).  They help us make reasoned points in an explanation or description.  The arguments here are very weak. Any conclusion would have a great deal of non-sampling uncertainty.

Also, when reasoning from survey results it is essential to report not only the achieved "sample" size but also how many attempts there were at recruiting respondents. It is also important to consider whether you have a good scientific sample or just a set of cases that volunteered. Large numbers of cases do not make up for unscientific methods of case selection.
Art Kendall
Social Research Consultants
On 4/28/2013 6:03 PM, MR [via SPSSX Discussion] wrote:
Art,

We have more than 5,000 cases in each wave. We only have wave 1 (satisfaction scale) and wave 2 (agreement scale). Below is how the questions were asked:

Wave 1: Please select your level of satisfaction with the following attributes. Please respond using the 1-5 scale, where 1 means you are extremely dissatisfied and 5 means you are extremely satisfied.
Attribute: Cleanliness of restaurant

Wave 2: Please select your level of agreement with the following attributes. Please respond using the 1-5 scale below, where 1 means you strongly disagree and 5 means you strongly agree.
Attribute: Restaurant was clean

So if I understand correctly, below are the steps:

1. Use the wave 1 cleanliness score as the DV and age, gender, income, etc. as IVs, and build an equation (what if the R-squared comes out low and there is large unexplained variation?)
2. Use the equation from step 1 to predict the wave 2 cleanliness score
3. Repeat the above, but use the wave 2 score to build the equation
4. However, after looking at the fit, how would I normalize the wave 1 score?

Essentially, I want a number by which I can adjust the wave 1 or wave 2 score. For example, reduce the wave 1 top-box score by 5% to make it comparable to wave 2. How would I get this number?
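To make concrete what kind of number I am after, here is a rough SPSS sketch (this assumes the two waves are stacked in one file with a WAVE indicator; the variable names are made up):

* Top-box indicator for the cleanliness item (5 = top box).
COMPUTE topbox = (clean = 5).
* Top-box proportion within each wave; the wave 1 minus wave 2
  difference between these means is the adjustment number I mean.
MEANS TABLES=topbox BY wave /CELLS=MEAN COUNT.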
Mike

On 2013-04-28, at 11:41 AM, Art Kendall <[hidden email]> wrote:

How many cases did you end up with at each time?
How many cases did you try to recruit at each time?



I am still not clear whether or not the stimuli (question stems) were the same. Did you change the wording on the questions and on the response scale? Please provide a couple of examples of the questions at the two times.

Call the first wave time 1 (t1) and the second wave time 2 (t2). Say you did a third wave at time 3 (t3); at t3 you would measure both satisfaction and agreement.
You could then compare (throwing in some more uncertainty, because the total package of stimuli would be different):
satisfaction t1 vs t3
agreement t2 vs t3.

You could then correlate agreement and satisfaction at t3.
A good scatterplot and a crosstab would let you visualize the relation between satisfaction and agreement.
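For example, at a hypothetical t3 where both versions were asked of the same respondents, a sketch along these lines (variable names made up) would give you the correlation, the scatterplot, and the crosstab:

* Relation between the two response formats at t3.
CORRELATIONS /VARIABLES=satis_t3 agree_t3.
GRAPH /SCATTERPLOT(BIVAR)=satis_t3 WITH agree_t3.
CROSSTABS /TABLES=satis_t3 BY agree_t3 /CELLS=COUNT ROW.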

If I read Gerry's post correctly, the approach would be:
at t1, predict satisfaction from other variables (maybe age, gender, whether the respondent was an employee or was visiting a patient, which meal, etc.) and
develop an equation.
Apply that equation to the data from t2, using the values for the IVs.
Look at the fit; visualize; examine residuals; etc.

At t2, predict agreement from other variables (maybe age, gender, whether the respondent was an employee or was visiting a patient, which meal, etc.) and
develop an equation.
Apply that equation to the data from t1, using the values for the IVs.
Look at the fit; visualize; examine residuals; etc.
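A rough sketch of that idea in SPSS syntax, assuming the two waves are stacked in one file with a WAVE indicator and the 1-5 response is held in a single variable RATING (all names here are made up). The /SELECT subcommand fits the equation on the selected cases only but saves predicted values and residuals for every case, so you can inspect the fit on the other wave:

* Fit the equation on wave 1 cases, inspect wave 2 cases.
REGRESSION
  /SELECT=wave EQ 1
  /DEPENDENT=rating
  /METHOD=ENTER age gender employee meal
  /SAVE PRED(pred_w1) RESID(res_w1).
* Mirror image: fit the equation on wave 2, inspect wave 1.
REGRESSION
  /SELECT=wave EQ 2
  /DEPENDENT=rating
  /METHOD=ENTER age gender employee meal
  /SAVE PRED(pred_w2) RESID(res_w2).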

Take a large dose of salt and make some guess about how comparable you think the two measures of performance are.

Aside: I have some difficulty with using satisfaction and agreement as measures of "performance," especially if they are the whole measure.


Art Kendall
Social Research Consultants