Statistical Question: Sample size for proportion

Statistical Question: Sample size for proportion

Marcioestat
Hi listers,

I am trying to determine the sample size for comparing two proportions of
success (P1 and P2). I am consulting the book by Fleiss (Statistical
Methods for Rates and Proportions), which gives a formula for the size of
n in terms of the proportions and the values of alpha and beta.
My question concerns the critical value C(1-beta): for alpha/2 I have
1.96 at the 5% level, but what is the value of C(1-beta) when 1-beta
equals 80%?

Thanks in advance,

Ribeiro

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD

Re: Statistical Question: Sample size for proportion

Marta Garcia-Granero
Hi Ribeiro
ZBeta = -0.842

* This code computes Zbeta for the 3 most used beta values *.
SET LOCALE=ENGLISH.
DATA LIST LIST/Beta(F8.2).
BEGIN DATA
0.2
0.1
0.05
END DATA.

COMPUTE Zbeta = IDF.NORMAL(Beta,0,1) .
FORMAT Zbeta(F8.3).
LIST.

Regards,
Marta García-Granero
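As a cross-check outside SPSS, the same critical values can be plugged into the Fleiss sample-size formula for two proportions, n = (C(alpha/2)*sqrt(2*p-bar*q-bar) + C(1-beta)*sqrt(p1*q1 + p2*q2))^2 / (p1 - p2)^2 (the version without the continuity correction). A sketch in Python; the example proportions P1 = 0.5 and P2 = 0.3 are illustrative, not from the thread:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for comparing two proportions (normal approximation,
    no continuity correction), as in Fleiss's formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.960 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # 0.842 for power = 0.80
    p_bar = (p1 + p2) / 2
    q_bar = 1 - p_bar
    root = (z_a * sqrt(2 * p_bar * q_bar)
            + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return ceil(root ** 2 / (p1 - p2) ** 2)

# NormalDist().inv_cdf(0.2) is -0.842, matching IDF.NORMAL(0.2,0,1) above.
print(n_per_group(0.5, 0.3))  # 93 per group for these illustrative inputs
```

Note that C(1-beta) enters the formula through its magnitude: Fleiss's tables list the positive value 0.842 for 80% power, which is simply the negative of the Zbeta that IDF.NORMAL returns.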


direction of scoring for MANOVA

Zdaniuk, Bozena-2
Hello, I am running a multivariate ANOVA with 5 outcomes compared across 3 groups. My outcomes are conceptually scored in opposite directions; e.g., a higher score on depression means worse mental health, whereas a higher score on life satisfaction means better mental health.
I am assuming that this should not matter for the overall F. Am I correct? That is, even though I predict that group 1 will score lower on depression and higher on life satisfaction than group 2, and group 2 than group 3, the overall F will still correctly tell me whether there is an overall group difference on the set of variables, right?
Or should I score all variables in the same direction to get the correct overall F?
Bozena

Bozena Zdaniuk, Ph.D.
University of Pittsburgh
UCSUR, 6th Fl.
121 University Place
Pittsburgh, PA 15260
Ph.: 412-624-5736
Fax: 412-624-4810
Email: [hidden email]
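A side note on the invariance question: reversing the scoring of an outcome is a sign flip, i.e. a nonsingular linear transformation of the outcome set, and multivariate test statistics such as Wilks' lambda are invariant under such transformations. A quick numerical check, as an illustrative sketch with simulated data (all numbers are made up, not from any real study):

```python
import numpy as np

rng = np.random.default_rng(0)
# Three groups, five outcomes, 20 cases per group (simulated).
groups = [rng.normal(loc=m, size=(20, 5)) for m in (0.0, 0.3, 0.6)]

def wilks_lambda(groups):
    """Wilks' lambda = det(W)/det(T): W = within-group SSCP, T = total SSCP."""
    data = np.vstack(groups)
    T = (data - data.mean(axis=0)).T @ (data - data.mean(axis=0))
    W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
    return np.linalg.det(W) / np.linalg.det(T)

lam = wilks_lambda(groups)
# Reverse the scoring direction of outcomes 1 and 3 in every group:
flip = np.array([-1.0, 1.0, -1.0, 1.0, 1.0])
lam_flipped = wilks_lambda([g * flip for g in groups])
# The two lambdas agree up to floating-point error, so the omnibus
# multivariate test is unaffected by the direction of scoring.
```

The univariate follow-ups will of course reverse the sign of their group differences, so the direction matters for interpretation, just not for the omnibus test.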


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Marta García-Granero
Sent: Monday, March 31, 2008 2:39 AM
To: [hidden email]
Subject: Re: Statistical Question: Sample size for proportion


Scoring of a modified form

Hashmi, Syed S
Hi all,

I have a questionnaire where the answer options are in the form of a
Likert scale (Definitely true, maybe true, unsure, maybe false,
definitely false). The replies are scored 1 through 5.

One of the researchers is modifying the form so that instead of 5
options, we'll end up with just 3 (Definitely true, unsure, definitely
false).

My question is: what should the scoring scale of this new 3-option
answer be? I was thinking that it should be 1 to 3. Additionally, it
should be kept in mind that the average score from the modified form
would not be directly comparable to the average score from the original
form that might have been published by other authors.

However, two other options were suggested:
1.  Scoring the new options on a 1-3-5 scale
2.  Scoring the new options on a 1.5-3-4.5 scale

Both of these scales would be based on assumptions about which answer
would likely be given in place of the missing options (e.g., would
individuals who would have answered "maybe true" on the original form be
more likely to answer "definitely true" or "unsure" on the modified
form?). However, I do not feel comfortable with these scales and believe
that the results would still not be comparable to any previously
published data that used the original 5-point form.

So what is the consensus out there? Is this re-scaling normally done? If
so, what is the best way to go about it? I'm not a social scientist and
have rarely used scales of this kind. The questions on this form ask
about different characteristics of individuals (I can't give the actual
items, but they're along the lines of "Your belief system helps you cope
with stress" or "You would return money that didn't belong to you").

Thanks in advance.
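To make the comparability worry concrete, here is an illustrative sketch in Python (the response counts are invented, not from any real administration): the same set of 3-option answers is scored under the three proposed schemes, and the resulting means land in quite different places on the nominal 1-to-5 metric.

```python
# Hypothetical counts of answers on the 3-option form.
counts = {"definitely true": 40, "unsure": 35, "definitely false": 25}

# The three scoring schemes discussed above.
schemes = {
    "1-2-3": {"definitely true": 1.0, "unsure": 2.0, "definitely false": 3.0},
    "1-3-5": {"definitely true": 1.0, "unsure": 3.0, "definitely false": 5.0},
    "1.5-3-4.5": {"definitely true": 1.5, "unsure": 3.0,
                  "definitely false": 4.5},
}

n = sum(counts.values())
means = {name: sum(counts[opt] * score[opt] for opt in counts) / n
         for name, score in schemes.items()}
print(means)  # identical answers, three different means
```

Whichever scheme is chosen, none of these means is comparable to published 5-point results without some kind of equating study, which supports the reservation expressed above.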


Re: Scoring of a modified form

Mark.W.Andrews-2
Hi,

Wow, two scale questions in a day.

If you are trying to compare a 3-point scale to a 5-point scale, I would
say don't waste much effort: there is no way to come up with a solution
that will be truly satisfactory. You need to be cognizant that you are
using scale data, and that each scale is open to the interpretation of
quirky, compulsive human beings who will respond differently to visual
cues and verbiage. Change these cues and you will not be comparing
apples with apples... at best, tangerines with oranges.

Mark




-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Hashmi, Syed S
Sent: Tuesday, April 01, 2008 3:24 PM
To: [hidden email]
Subject: Scoring of a modified form


Re: Scoring of a modified form

Hashmi, Syed S
Thanks Mark.

That's the feeling I had, that the new scale should just be coded 1-3
and the numbers interpreted as they fall.  I just wanted to make sure
that there wasn't something that I was missing.

As for the multiple scale questions in one day.... well, it is April
1st!!! :)

- Shahrukh


> -----Original Message-----
> From: Mark.W.Andrews [mailto:[hidden email]]
> Sent: Tuesday, April 01, 2008 4:11 PM
> To: Hashmi, Syed S; [hidden email]
> Subject: RE: Scoring of a modified form

Re: Scoring of a modified form

Swank, Paul R
In reply to this post by Hashmi, Syed S
Collapsing "maybe true" with "definitely true" and calling the response
"definitely true" seems problematic to me.

Paul R. Swank, Ph.D.
Professor and Director of Research
Children's Learning Institute
University of Texas Health Science Center - Houston


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Hashmi, Syed S
Sent: Tuesday, April 01, 2008 2:24 PM
To: [hidden email]
Subject: Scoring of a modified form


Standardizing data

Hashmi, Syed S
In reply to this post by Mark.W.Andrews-2
Hi all,

I have a dataset that includes information from cell cultures that have
some sort of radioactive marker.  During the experiments, cells were
cultured in 6 different mediums containing different ratios of chemicals
A and B (1:1, 1:2, 1:3, etc). The base of the medium (that contains the
radioactive marker) was made at one time, separated into 6 trays, and a
different A:B ratio reagent was added to each of the 6 trays followed by
the addition of the cells.  This whole setup was done 5 times, i.e. on 5
different days.

The study question was whether the different A:B ratios result in a
difference in radioactive uptake (a continuous variable) by the cells.
Comparisons were between the referent 1:1 ratio and each of the other
ratios.  The researcher was going to take the radiation values for each
A:B from all 5 days and average them out to get a mean radiation value
for that particular A:B ratio.  Once all the means and sd were
calculated, he was going to perform a t-test to identify a difference in
mean between the 1:1 medium and each of the other mediums.

However, due to the issues intrinsic in the medium creation and the way
the radioactivity is measured, the values vary quite dramatically from
day to day.  Therefore, the medium created on day 1 might have values in
the 0.5 to 2.0 range, while medium created on day 2 might have values in
the 12.0 to 14.0 range, medium created on day 3 might have values in the
3.0-5.0 range, and so on.

My question is: what would be the best way to standardize these values?
Should the 1:1 values be converted to a standard value of 1.0 for each
day, with all A:B values for that day standardized relative to it and
then compared? Can we still run a t-test if the referent category ends
up with all values of 1.0? How can I incorporate the standard deviation
into this? Or should another test be performed?

Thanks.

- Shahrukh


Re: Scoring of a modified form

Hashmi, Syed S
In reply to this post by Swank, Paul R
Paul,

It wasn't really a collapsing of the responses; rather, the possible
response options were limited from 5 to 3. Additionally, I don't think
the investigator was planning to compare responses from the 5-option
questionnaire to responses from the 3-option questionnaire.

- Shahrukh



> -----Original Message-----
> From: Swank, Paul R
> Sent: Wednesday, April 02, 2008 3:21 PM
> To: Hashmi, Syed S; [hidden email]
> Subject: RE: Scoring of a modified form

Re: Scoring of a modified form

Linda Bruce
How about something like "no", "maybe", "yes"? Depending on the way the
questions are worded, you could also use a three-point "agreement" scale,
but I'm not sure what the mid-point would be: "neither agree nor
disagree", "not sure"...
_________

Linda B

Address Locator / Indice de l'adresse : 0702E



"Hashmi, Syed S" <[hidden email]>
Sent by: "SPSSX(r) Discussion" <[hidden email]>
2008-04-02 04:35 PM
To: [hidden email]
Subject: Re: Scoring of a modified form


Re: Scoring of a modified form

Swank, Paul R
In reply to this post by Hashmi, Syed S
Thank goodness for that!

Paul R. Swank, Ph.D.
Professor and Director of Research
Children's Learning Institute
University of Texas Health Science Center - Houston


-----Original Message-----
From: Hashmi, Syed S
Sent: Wednesday, April 02, 2008 3:35 PM
To: Swank, Paul R; '[hidden email]'
Subject: RE: Scoring of a modified form


Re: Standardizing data

Maguin, Eugene
In reply to this post by Hashmi, Syed S
Syed,

So, the experiment can be described as a design with one between factor
with six levels and an N of 5. I think it would be better to do an ANOVA
with a follow-up post-hoc test designed for comparing the A:B ratio
means against the control. One example of this type of post-hoc test is
Dunnett's. I'd guess that there are others, and they may be better; I
don't know about this.

I don't know anything about the procedures for the kind of experiments
you are describing, in particular why radiation measurements might vary
by a factor of 10 (or more, maybe). Nor do I know anything about how
cells take up radiation. This limits what I can say. That said, let me
ask this. Suppose you mix a number of batches and divide each batch into
the six treatments. No experimental units (the cells) and no A:B mixture
are added at this point. You measure the radiation level in each
treatment for each batch. My impression is that you expect only
measurement-error variability in the within-batch, between-treatment
radiation means, but high variability in the between-batch means. Now
you add the A:B reagent and the cells, and you observe uptake as a
function of the A:B ratio. The question is: does uptake depend on the
initial radiation level as well as the A:B ratio, or does uptake depend
only on the A:B ratio?

Sounds like you assume that uptake depends only on A:B. It also sounds like
you are going to standardize across days within treatment. If you do this,
the mean and SD of each treatment will be 0.0 and 1.0, respectively. I think
it'd be better to standardize across treatments within day. If you did this,
each batch (day) mean and SD would be 0.0 and 1.0, and the variability across
days within treatment would be preserved.
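Something like this AGGREGATE approach might do it. This is only a sketch: I'm assuming variables named Uptake and Day, which may not match your actual file.

* Sketch only: standardize uptake across treatments within each day.
* Variable names (Uptake, Day) are assumed, not taken from your dataset.
AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /BREAK=Day
  /DayMean=MEAN(Uptake)
  /DaySD=SD(Uptake).
COMPUTE zUptake=(Uptake-DayMean)/DaySD.
EXECUTE.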

I'd be interested to hear the comments of others with experience in this
research area.

Gene Maguin


Re: Standardizing data

Hashmi, Syed S
Thanks Gene,

The project that I had described was not mine, so I am slightly ignorant about some issues.  From what I do understand, the radiation uptake depends mainly on the A:B ratios.  As for the difference in the readings, it's more of a measurement bias in their measuring instrument.  I was quite surprised to hear of the degree of variability they have in the measurements and, personally, I would take any data from such an instrument with a pinch of salt.

That being said, they have the data, have to present the data and were wondering what to do.  I was thinking about ANCOVA or maybe normalizing data within each A:B to obtain z-scores and running tests on those to compare different A:B strata. However, I like your idea about standardizing values within each day instead and comparing them.

I know this isn't really an SPSS question (plus, they'll probably end up doing some descriptive analysis) but I was hoping to learn some stats in the process :)

- Shahrukh




Re: Standardizing data

Richard Ristow
In reply to this post by Hashmi, Syed S
At 04:32 PM 4/2/2008, Hashmi, Syed S wrote:

>Cells were cultured in 6 different mediums
>containing different ratios of chemicals A and B
>(1:1, 1:2, 1:3, etc). The base of the medium was
>made at one time, separated into 6 trays, and a
>different A:B ratio reagent was added to each of
>the 6 trays followed by the addition of the
>cells.  This whole setup was done 5 times, i.e. on 5 different days.
>
>The study question was whether the different A:B
>ratios result in a difference in radioactive
>uptake (a continuous variable) by the cells.
>[...] He was going to perform a t-test to
>identify a difference in mean between the 1:1
>medium and each of the other mediums.

To start with, that isn't what he wants to do. As
described, it's a six-level one-way ANOVA, and
should be analyzed as such, once, rather than a
series of t-tests. Look at ONEWAY. To then decide
which conditions (A:B ratios) show significant
differences from each other, use the POSTHOC
subcommand. Among the post-hoc tests, I see that
Marta García-Granero recommends SNK or Tukey(*).

>However, due to the issues intrinsic in the
>medium creation, the values vary quite
>dramatically from day to day.  Therefore, the
>medium created on day 1 might have values in the
>0.5 to 2.0 range, while medium created on day 2
>might have values in the 12.0 to 14.0 range,
>medium created on day 3 might have values in the 3.0-5.0 range, and so on.
>
>My question is what would be the best way to standardize these values?

You don't. You now have two factors: base medium
(day), and A:B ratio. Medium day will be a random effect.

Look at UNIANOVA. It may be as simple as

UNIANOVA RadioUpt BY Medium Ratio
     /RANDOM Medium
     /POSTHOC <as for ONEWAY>.

Possibly this (or ONEWAY, or both) can be clicked
up from the menus; I do less of that.

-Good luck,
  Richard
...............................
See the flowchart for choosing analytic
procedures that Marta has just posted:
http://gjyp.nl/marta/Flowchart%20(English).pdf, as noted in her posting

Date:    Fri, 4 Apr 2008 12:35:55 +0200
From:    Marta García-Granero <[hidden email]>
Subject: Flowchart already hosted!
To:      [hidden email]

And, our thanks to Gjalt-Jorn Peters for hosting it!


Re: Standardizing data

Marta Garcia-Granero
Hi

I am a bit confused by your description, but I think your data look like
this.

Day  Ratio1  Ratio2  Ratio3  Ratio4  Ratio5  Ratio6
 1     .       .       .       .       .       .
 2     .       .       .       .       .       .
 3     .       .       .       .       .       .
 4     .       .       .       .       .       .
 5     .       .       .       .       .       .

A randomized block design (K dependent samples, quantitative variables...).

If the data layout looks as above, to run UNIANOVA you must first stack
your dataset using VARSTOCASES (ask for help if you need it). The
UNIANOVA syntax should then look like this

UNIANOVA RadioUpt BY Ratio Days
  /RANDOM=Days
  /POSTHOC=....
  /SAVE=RESID
  /DESIGN=Ratio Days.
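For reference, the stacking step might look like this. Again just a sketch: I'm assuming the wide variables are named Ratio1 to Ratio6 and that there is a Day variable.

* Sketch: stack Ratio1 to Ratio6 into one variable (names assumed).
VARSTOCASES
  /MAKE RadioUpt FROM Ratio1 TO Ratio6
  /INDEX=Ratio(6)
  /KEEP=Day.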

The residuals should be checked for normality:

EXAMINE
  VARIABLES=RES_1
  /PLOT BOXPLOT STEMLEAF NPPLOT
  /STATISTICS DESCRIPTIVES.

If your data are very clearly non-normal (skewed or with extreme
outliers), then you should use the Friedman test instead. The original data
layout (without stacking it using VARSTOCASES) is the one you will use
to run that procedure. If the Friedman test is significant, then use
pairwise sign tests (don't use Wilcoxon, unless your data are fairly
symmetric and you use the Quade test for overall significance), followed by
some p-value adjustment method (like Dunn-Sidak or Holm). Again, ask for
code to run that last task if you need it; I'll send it tomorrow (I'm
calling it a day right now and will be turning my computer off after I
hit "Send").
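On the unstacked layout, the overall test and the pairwise sign tests against the 1:1 column might be sketched like this (variable names assumed; the p-value adjustment would still need to be applied to the resulting p-values afterwards):

* Sketch: Friedman test plus pairwise sign tests vs Ratio1 (names assumed).
NPAR TESTS
  /FRIEDMAN=Ratio1 Ratio2 Ratio3 Ratio4 Ratio5 Ratio6.
NPAR TESTS
  /SIGN=Ratio1 WITH Ratio2 Ratio3 Ratio4 Ratio5 Ratio6.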

Saying goodbye before hitting Send and shutting down ;)
Marta



Re: Standardizing data

Hashmi, Syed S
Thanks Marta and Richard for your comments and suggestions

Marta, the data layout is pretty much how you wrote it, though the columns and rows were switched.  As for the analysis, I had also considered ANOVA previously, but the investigator wanted to describe the means of each ratio stratum separately and compare each to the referent 1:1 stratum, thus obtaining a bunch of different p-values.  Hence my question about standardizing the data from day to day.

They actually have a statistician helping them now (something I'm not!) and I think they are going down some ANOVA path now.  Nonetheless, thanks for all the suggestions from everyone. It was good to get that syntax for UNIANOVA (I've already done CASESTOVARS before, thanks to the listserv). Part of the reason I like this listserv is that, apart from getting help with problems I'm facing, I always end up learning something new about other stuff that I rarely come across in my own research.  Thanks all.


Re: Standardizing data

Marta Garcia-Granero
Hashmi, Syed S wrote:
> Thanks Marta and Richard for your comments and suggestions
>
> Marta, the data layout is pretty much how you wrote it though the cols and rows were switched.  As for the analysis, I had also considered ANOVA previously, but the investigator wanted to describe the means of each ratio strata separately and compare each to the referent 1:1 strata, thus obtaining a bunch of different p-values.  Hence my question about standardizing the data from day to day.
>
Taking into account the day-to-day variation by including it in the UNIANOVA
model as a random factor somehow "standardizes" your data, provided
that every ratio was tested on every day. If you want to compare all the
ratios against 1:1 (using it as a control group), then the post-hoc method
you want to use, instead of SNK or Tukey-HSD, is Dunnett.
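In syntax, that might look something like this. A sketch only: variable names are assumed, and note that UNIANOVA's Dunnett test takes the last category as the control by default, so you may need to code the 1:1 ratio as the highest category (or check the DUNNETT keyword options in your version's syntax reference).

* Sketch: Dunnett comparisons against the control ratio (names assumed).
* By default the LAST category of Ratio is treated as the control.
UNIANOVA RadioUpt BY Ratio Days
  /RANDOM=Days
  /POSTHOC=Ratio(DUNNETT)
  /DESIGN=Ratio Days.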
> They actually have a statistician helping them now (something I'm not!) and I think they are going down some ANOVA path now.  Nonetheless, thanks for all the suggestions from everyone. It was good to get that syntax for UNIANOVA (I've already done CASETOVARS before, thanks to the listserv). Part of the reason I like this listserv is that apart from getting help about problems I'm facing, I always end up learning something new about other stuff that I rarely come across during my own research.

HTH,
Marta
