Question Regarding Analysis

Question Regarding Analysis

bdates

I have been asked to analyze data from a large (n = 402) study of an intervention for adolescents. One of the measures, which is quite widely used, is the CAFAS, which assesses functioning in children and youth ages 6 to 18. There are eight items, scored from 0 to 30 in increments of 10 (i.e., 0, 10, 20, 30). The eight items produce a total score ranging from 0 to 240. A review of the literature does not reveal really good reliability, either alpha or ICC (the measure is clinician-completed). Item-total correlations range from .20 to .57. The distributions of scores on all items are skewed, either positively or negatively. I'm proposing to analyze each of the scales as ordinal and the total score as interval, using a repeated measures approach since there are multiple measures per youth. I neither designed the study nor chose the measures; I've simply been commissioned to analyze the data. Any thoughts on treating the items as ordinal? I know it's conservative, but I have difficulty accepting a four-point discontinuous scale as truly interval in nature.


Brian
===================== To manage your subscription to SPSSX-L, send a message to [hidden email] (not to SPSSX-L), with no body text except the command. To leave the list, send the command SIGNOFF SPSSX-L For a list of commands to manage subscriptions, send the command INFO REFCARD
Re: Question Regarding Analysis

Rich Ulrich
Google shows me that this is a scale with a long history of use,
sampling in various populations; and that the 4-point scoring
stands for Severe/ Moderate/ Mild/ Little or no impairment.

On reliability:
It is the wrong idea to use "internal consistency" as /the/ criterion
for reliability for a composite that covers a variety of areas. You
want rater-rater comparisons for assessing consistency, and some
external comparisons for validity.

On "equal intervals":
If I wanted evidence of the non-interval nature of the scaling, I
would look at the cross-time tabulations, even better than the
cross-rater tabulations.  What I would look for is the frequency
of changes between categories. If, for instance, no one "ever"
moves from 0 to 10 (compared to other changes), then that
suggests that the interval is largest between 0 and 10. But is that
perhaps a function of the high-risk population you are sampling?
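That cross-time tabulation takes only a few lines. Here is a Python sketch with made-up paired scores (in SPSS one would get the same table from CROSSTABS); rows are the time-1 category, columns the time-2 category, and a near-empty off-diagonal cell flags a transition that "never" happens:

```python
from collections import Counter

def transition_table(time1, time2, levels=(0, 10, 20, 30)):
    """Cross-tabulate paired scores: rows are time-1 categories,
    columns are time-2 categories, cells are counts of youths."""
    counts = Counter(zip(time1, time2))
    return [[counts[(a, b)] for b in levels] for a in levels]

# Hypothetical paired item scores (time 1, time 2) for eight youths
t1 = [10, 20, 30, 20, 10, 30, 20, 0]
t2 = [10, 10, 20, 20,  0, 30, 10, 10]
for level, row in zip((0, 10, 20, 30), transition_table(t1, t2)):
    print(level, row)
```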

Do you have a subjective feeling that the distances are unequal?
It is very, very common for 4-point scales to be analyzed by ANOVA
as if they were equal-interval to a suitable degree. For a much-used
scale, apparently others have been satisfied. The question is not
"Are these unequal intervals" but, rather, "Are these intervals
improved enough by some transformation to justify the complication
of computation, and the confusion in presenting results?"

On transformation to rank:
The simple rule of thumb is that, if the means do not provide a
good comparison between groups, then you probably should
transform.  The usual "non-parametric" test gives an analysis,
essentially by ANOVA, of the rank-transformed scores. 

So, for a few skewed variables, compute the rank-transformed
scores. (If 400 scores are /equally/ divided into four groups, your
result effectively matches the "intervals" you started out with,
since the 100 ties in each group give you average ranks of  50.5,
150.5, 250.5, and 350.5 -- Transforming "10" to 50.5, etc., gives
you the same ANOVA F, since you have simply performed the
same linear transformation on each of the scores.)
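That arithmetic is easy to verify. Below is a small Python sketch (the group compositions are hypothetical; in SPSS the midranks would come from the RANK command): two groups whose pooled 400 scores fill the four categories with exactly 100 each, so the midrank transform is linear and the ANOVA F is unchanged.

```python
def avg_ranks(values, levels=(0, 10, 20, 30)):
    """Map each score to its average (mid) rank over the pooled sample."""
    counts = {v: values.count(v) for v in levels}
    ranks, start = {}, 0
    for v in levels:
        ranks[v] = start + (counts[v] + 1) / 2.0  # average rank of the ties
        start += counts[v]
    return [ranks[v] for v in values]

def anova_f(groups):
    """One-way ANOVA F statistic computed from first principles."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n_total
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n_total - k))

# Two hypothetical groups; each category holds exactly 100 of the 400 scores
a = [0] * 60 + [10] * 50 + [20] * 50 + [30] * 40
b = [0] * 40 + [10] * 50 + [20] * 50 + [30] * 60
pooled = avg_ranks(a + b)
ra, rb = pooled[:len(a)], pooled[len(a):]
print(sorted(set(pooled)))                 # [50.5, 150.5, 250.5, 350.5]
print(anova_f([a, b]), anova_f([ra, rb]))  # same F before and after ranking
```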

Does this version of "interval", when applied to the skewed
variables (where it makes a difference) make more sense? In my
experience, I usually haven't preferred the simple, rank scoring.

Advanced scale development goes a step further, and uses a
logistic transformation on the average-rank.  This DOES give
a spacing for which there is some theoretical justification,
and better "normality".  I won't say any more about that, except
that I want a "norming" sample if I'm going to set norms for that
version of scaling -- and, personally, I never did play with scale
development that intensively.
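For the curious, here is a minimal sketch of that logistic transformation, assuming the common plotting position p = r/(N+1) (other conventions exist). With four equally filled categories, the logit spacing comes out symmetric with wider gaps toward the extremes, which is the sense in which it improves "normality":

```python
import math

def logit_ranks(values):
    """Midrank -> proportion p = r/(N+1) -> logit log(p/(1-p))."""
    n = len(values)
    ranks, start = {}, 0
    for v in sorted(set(values)):
        c = values.count(v)
        ranks[v] = start + (c + 1) / 2.0   # average rank of the c ties
        start += c
    def logit(r):
        p = r / (n + 1)
        return math.log(p / (1 - p))
    return [logit(ranks[v]) for v in values]

# 400 scores split equally across the four categories
scores = [0] * 100 + [10] * 100 + [20] * 100 + [30] * 100
spacing = sorted(set(logit_ranks(scores)))
print(spacing)  # four values, symmetric, wider gaps at the extremes
```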

--
Rich Ulrich


From: SPSSX(r) Discussion <[hidden email]> on behalf of Dates, Brian <[hidden email]>
Sent: Tuesday, May 7, 2019 10:33 AM
To: [hidden email]
Subject: Question Regarding Analysis
 

Re: Question Regarding Analysis

bdates

Rich,


Thanks for taking the quite impressive amount of time to answer this. On the reliability side, I've always favored the ICC rather than Cronbach's alpha, although most of the author's work has been with Cronbach. I'll take your advice on looking at the interval/ordinal nature of each item/subscale. Yes, my subjective feeling is that the distances are not equal. I may be compounding the difficulty, but each level of severity, 0 to 30, has a number of anchors, which are quite specific. The score on any item is the most severe category indicated by an anchor. However, one can be assigned a 30 with one anchor checked or with three checked. Lower anchors, e.g., at 20 or 10, are not even counted in assigning a score. So in many ways the score is really categorical, and only later assigned a numerical value. I really think a youth with three anchors in a category is probably more disturbed than one with only a single anchor present. Just my thoughts.


Thanks again for your time and expertise.


Brian

From: Rich Ulrich <[hidden email]>
Sent: Tuesday, May 7, 2019 12:36:27 PM
To: [hidden email]; Dates, Brian
Subject: Re: Question Regarding Analysis
 
Re: Question Regarding Analysis

Bruce Weaver
In reply to this post by bdates
Hi Brian.  Re treating the sum of the eight items as interval, here's an
interesting (and provocative) article I was alerted to just yesterday:

Liddell TM, Kruschke JK. Analyzing ordinal data with metric models: What
could possibly go wrong?. Journal of Experimental Social Psychology. 2018
Nov 1;79:328-48.

  https://osf.io/9h3et/download?format=pdf

If you have institutional access to sciencedirect.com, you can get the final
published article here:

  https://www.sciencedirect.com/science/article/pii/S0022103117307746

From p. 344 in the final published article:

"Some authors have argued that, despite the ordinal character of individual
Likert items, averaged ordinal items can have an emergent property of an
interval scale and so it is appropriate to apply metric methods to the
averaged values (e.g., Carifio and Perla, 2007; Carifio and Perla, 2008). It
is intuitively plausible that the averaging could produce data that at least
look more continuous and therefore may not suffer from the problems pointed
out above. Unfortunately that intuition is wrong. We show in this section
that an average of ordinal items has the same problems as a single item."
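The quoted claim is easy to demonstrate with a toy latent-normal model (my own made-up thresholds and codes, not the paper's example): when thresholds are unevenly placed and the latent spreads differ, a group that is higher on the latent trait can still have the lower mean coded score.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def expected_item_score(mu, sigma, cuts=(-2.0, -1.5, -1.0), step=10):
    """Expected 0/10/20/30 score under a latent-normal rater model:
    the coded score is step * (number of thresholds exceeded)."""
    return step * sum(1 - norm_cdf((c - mu) / sigma) for c in cuts)

# Group B sits higher on the latent trait (0.5 > 0.0), yet with these
# low-lying thresholds its mean coded score comes out LOWER than A's.
ea = expected_item_score(mu=0.0, sigma=1.0)
eb = expected_item_score(mu=0.5, sigma=3.0)
print(ea, eb)  # ea > eb despite B's higher latent mean
```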

HTH.


--
Bruce Weaver
[hidden email]
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

NOTE: My Hotmail account is not monitored regularly.
To send me an e-mail, please use the address shown above.


Re: Question Regarding Analysis

bdates

Wow! Talk about timing!  Thanks, Bruce. I'll take a look. I would have thought for sure that the summing would have created an interval variable.


Brian

From: SPSSX(r) Discussion <[hidden email]> on behalf of Bruce Weaver <[hidden email]>
Sent: Tuesday, May 7, 2019 4:07:01 PM
To: [hidden email]
Subject: Re: Question Regarding Analysis
 
Re: Question Regarding Analysis

David Marso
In reply to this post by Bruce Weaver
IRT (item response theory) might be a fruitful consideration?


Please reply to the list and not to my personal email.
Those desiring my consulting or training services please feel free to email me.
---
"Nolite dare sanctum canibus neque mittatis margaritas vestras ante porcos ne forte conculcent eas pedibus suis."
Cum es damnatorum possederunt porcos iens ut salire off sanguinum cliff in abyssum?"
Reply | Threaded
Open this post in threaded view
|

Re: Question Regarding Analysis

Bruce Weaver
Administrator
In reply to this post by bdates
Yep.  I think that's what most of us have been taught.  I would have thought
the same before reading the Liddell & Kruschke article.  Time for a
re-think.  

I forgot to mention this.  Liddell & Kruschke used ordinal probit models in
their analyses, but acknowledged near the end of the article that one could
use other methods such as ordinal logit (i.e., ordinal logistic regression).
That would be my preference, as I like the odds ratio.  
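For intuition, the quantity a cumulative (proportional-odds) logit model summarizes can be hand-computed from a cross-tabulation. A Python sketch with made-up counts (in SPSS the model itself would be fitted with PLUM); the proportional-odds assumption is that these per-cut odds ratios are roughly constant:

```python
def cumulative_odds_ratios(counts_a, counts_b):
    """Observed odds ratios at each cut-point of an ordered item:
    odds of scoring above the cut in group A vs. group B.
    Assumes no cut leaves either group entirely above or below it."""
    ors = []
    na, nb = sum(counts_a), sum(counts_b)
    for cut in range(1, len(counts_a)):
        above_a, above_b = sum(counts_a[cut:]), sum(counts_b[cut:])
        odds_a = above_a / (na - above_a)
        odds_b = above_b / (nb - above_b)
        ors.append(odds_a / odds_b)
    return ors

# Hypothetical counts in categories 0/10/20/30 for two arms
treat   = [40, 30, 20, 10]
control = [20, 30, 30, 20]
print(cumulative_odds_ratios(treat, control))  # one OR per cut-point
```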

Cheers,
Bruce





Re: Question Regarding Analysis

bdates

Ordinal logistic regression would be my choice as well. Several years ago, because I have a large dataset of CAFAS data, I did a Rasch analysis and found the scales to be ordinal rather than interval; the thresholds were quite irregular.


Brian

From: SPSSX(r) Discussion <[hidden email]> on behalf of Bruce Weaver <[hidden email]>
Sent: Tuesday, May 7, 2019 4:16:16 PM
To: [hidden email]
Subject: Re: Question Regarding Analysis
 