How many cases did you end up with at each time?
How many cases did you try to recruit at each time?
I am still not clear whether or not the stimuli
(question stems) were the same.
Did you change the wording on the questions and on the
response scale? Please provide a couple of examples of the
questions at the two times.
Call the first wave time 1 (t1) and the second wave time 2 (t2).
Say you did a third wave at time 3 (t3). At t3 you would
measure both satisfaction and agreement.
You could then compare (throwing in some more uncertainty,
because the total package of stimuli would be different):
satisfaction at t1 vs t3
agreement at t2 vs t3.
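For the t1 vs t3 comparison, a minimal syntax sketch, assuming
both waves are stacked in one file with a numeric wave variable
(the variable names here are hypothetical):

* Independent-samples comparison of satisfaction, wave 1 vs wave 3.
T-TEST GROUPS=wave(1 3)
  /VARIABLES=satisfaction.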
You could then correlate agreement and satisfaction at t3.
A good scatterplot and a crosstab would show you what the
relation between satisfaction and agreement is.
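For instance, a sketch of that correlation, scatterplot, and
crosstab among the t3 respondents only (same hypothetical
stacked file and variable names as above):

* Restrict the next procedures to wave 3 cases.
COMPUTE at_t3 = (wave = 3).
FILTER BY at_t3.
CORRELATIONS
  /VARIABLES=satisfaction agreement.
GRAPH
  /SCATTERPLOT(BIVAR)=satisfaction WITH agreement.
CROSSTABS
  /TABLES=satisfaction BY agreement
  /CELLS=COUNT ROW.
FILTER OFF.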
If I read Gerry's post correctly, the approach would be
(sketched in syntax below):
At t1, predict satisfaction from other variables, maybe age,
gender, whether the respondent was an employee or was
visiting a patient, which meal, etc.
Develop an equation.
Apply that equation to the data from t2 using the values
for the IVs.
Look at the fit, visualize, examine residuals, etc.
At t2, predict agreement from other variables, maybe age,
gender, whether the respondent was an employee or was
visiting a patient, which meal, etc.
Develop an equation.
Apply that equation to the data from t1 using the values
for the IVs.
Look at the fit, visualize, examine residuals, etc.
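A sketch of that cross-prediction, again assuming a single
stacked file with a wave indicator; the predictors (age,
gender, employee, meal) are placeholders for whatever IVs you
actually have. REGRESSION's /SELECT estimates the equation on
the selected cases only but saves predictions for all cases,
so each wave's equation gets applied to the other wave's data:

* Develop the satisfaction equation on wave 1 cases only.
* Predicted values are saved for unselected (wave 2) cases too.
REGRESSION
  /SELECT=wave EQ 1
  /DEPENDENT satisfaction
  /METHOD=ENTER age gender employee meal
  /SAVE PRED(pre_sat) RESID(res_sat).
* Develop the agreement equation on wave 2; same logic in reverse.
REGRESSION
  /SELECT=wave EQ 2
  /DEPENDENT agreement
  /METHOD=ENTER age gender employee meal
  /SAVE PRED(pre_agr) RESID(res_agr).
* Then inspect fit, plots, and residuals for each equation.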
Take a large dose of salt and make some guess about how
comparable you think the two measures of performance are.
Aside: I have some difficulty with using satisfaction
and agreement as measures of "performance", especially if
they are the whole measure.
Art Kendall
Social Research Consultants
On 4/28/2013 11:06 AM, MR [via SPSSX Discussion] wrote:
Art,
Thanks for your response:
1. Respondents are different in the two waves.
2. We asked about satisfaction with food, staff, and speed of
service. We asked the same measures in wave 2 but on an
agreement scale.
3. This is non-profit work for a community hospital
restaurant. Unfortunately the decision maker had his own
hypothesis on scales. We debated a lot but he still went
ahead with the scale change.
4. Yes, the scale magnitudes were the same.
Re: regression, I am really not getting my head
around the regression part: is t1 the wave 1 score for
food, and t3 the score for food in wave 2? What's the
dependent variable here? Note that I cannot run repeated
measures, as the respondents are not the same.
Thanks,
On 2013-04-28, at 8:46 AM, Art Kendall <[hidden email]> wrote:
Do you have the same respondents in both waves? Can you
tie responses to individuals?
What did they agree with?
Did you have a series of items with the same
response scale to create a summative score, or do
you have a single item?
You could do a regression as Gerry suggested.
On later waves you could ask for both measures of
performance. You would then have
t1 vs t3 for satisfaction and
t2 vs t3 for agreement
However, I do not think you can conclude at this
time that performance dropped. You can conclude
that the way that you measured performance
changed.
Who changed the response format? Were the stems
identical or merely similar?
Art Kendall
Social Research Consultants
On 4/27/2013 8:24 PM, MR [via SPSSX Discussion]
wrote:
Team,
I have one problem on my hands and am running out
of options on which statistics to use in SPSS.
First, I know that what I want to do is not
advisable, but trust me, I have fought my battle on
this. This is what I want to achieve:
Issue: We did the wave 1 survey using a 5-point
satisfaction scale. The second wave was conducted
using a 5-point agreement scale. As expected, the
top-box score from the agreement scale was lower
than the top-box score from the satisfaction scale
by 10 percentage points. For example, the
agreement-scale top box in wave 2 came out at 50%,
while in wave 1 it was 60%.
Goal: I have compared the historical data and
conclude that the score difference is purely due to
the scale change. However, I want to normalize the
wave 2 score so that I can compare it with wave 1.
I know this is not advisable but I have to do it.
I googled but could not find any statistic that
helps to normalize the scores; indeed, I don't
know where to begin. I need a scientific method to
normalize the scores so that they are comparable.
I don't want to conclude that performance dropped
by 10 points just because the scale changed.
Your wisdom and help is very much required.
Thanks,
Mike