I am analyzing data from eight sites, each of which has a score on a Likert-type scale of program fidelity based upon a review of 10 client charts per site. Is there a valid way of establishing a cutoff score for distinguishing good/poor based on only these eight sites? Thanks in advance for any input.

Brian

Brian G. Dates
Director of Evaluation and Research
Southwest Counseling Solutions
1700 Waterman
Detroit, MI 48209
313-841-7442
[hidden email]

Leading the Way in Building a Healthy Community

====================
To manage your subscription to SPSSX-L, send a message to [hidden email] (not to SPSSX-L), with no body text except the command. To leave the list, send the command SIGNOFF SPSSX-L. For a list of commands to manage subscriptions, send the command INFO REFCARD.
At 07:56 AM 12/1/2008, Dates, Brian wrote:
>I am analyzing data from eight sites, each of which has a score on a
>Likert-type scale of program fidelity based upon a review of 10 client
>charts per site. Is there a valid way of establishing a cutoff score
>for distinguishing good/poor based on only these eight sites?

Brian,
Just to clarify: when you wrote "distinguishing good/poor," did you mean collapsing the Likert-type scales into a simple dichotomy for the sake of enhancing the counts per cell? Or perhaps I should ask what exactly you mean by "score"? In other words, it might help to explain the structure of your data a little more.

Bob Schacht

Robert M. Schacht, Ph.D. <[hidden email]>
Pacific Basin Rehabilitation Research & Training Center
1268 Young Street, Suite #204
Research Center, University of Hawaii
Honolulu, HI 96814
Bob,
Thanks for the reply. I hope this helps to clarify. There are eight sites at which a program is being operated. Each of the eight sites has conducted a fidelity assessment to determine how close the program at that site is to the program model. A fidelity assessment is conducted by examining 10 patient charts at each site for information, then using the chart information to determine a score on each of 15 items on the fidelity assessment. Each item is given a score of 0-5. The total fidelity score is the mean of the 15 items, which can range from 0-5. The 10 patient charts are not rated on a scale; they are simply examined for the purpose of determining scores on each of the 15 items, so they really don't constitute a source of data.

The data for one site might look like this (using just 5 items to save space):

Item     Score
1        4
2        3
3        4
4        5
5        3
Average  3.80

So, I have eight averages from the eight different sites. My client would like to have a cutoff score based on the eight sites, which would help determine which sites are doing well and which are not. I've calculated cutoff scores in the past, but always with a large number of respondents. I'm not sure that this number of sites even justifies setting a cutoff score, but I wanted to double-check before writing the report. Thanks.

Brian

-----Original Message-----
From: Bob Schacht [mailto:[hidden email]]
Sent: Monday, December 01, 2008 2:43 PM
To: Dates, Brian; [hidden email]
Subject: [NEWSENDER] - Re: Cutoff Scores for Small N
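The arithmetic described above can be sketched in a few lines of Python. The item ratings below are hypothetical stand-ins (Site 1 reuses the five scores from the example, and only 5 of the 15 items are shown per site), and the mean-minus-one-SD cutoff is just one debatable convention, not a recommendation:

```python
import statistics

# Hypothetical item ratings (0-5), five items per site as in the example above.
site_items = {
    "Site 1": [4, 3, 4, 5, 3],  # the example site: mean 3.80
    "Site 2": [3, 3, 4, 4, 3],
    "Site 3": [2, 2, 3, 2, 3],
    "Site 4": [4, 4, 5, 4, 4],
    "Site 5": [3, 4, 3, 3, 4],
    "Site 6": [4, 3, 3, 4, 4],
    "Site 7": [2, 3, 2, 2, 2],
    "Site 8": [4, 4, 4, 3, 4],
}

# Total fidelity score per site = mean of its item scores (range 0-5).
site_means = {site: statistics.mean(items) for site, items in site_items.items()}

# One possible convention: flag sites more than one standard deviation
# below the grand mean of the eight site-level scores.
cutoff = statistics.mean(site_means.values()) - statistics.stdev(site_means.values())
flagged = sorted(s for s, m in site_means.items() if m < cutoff)
```

Whether any such cutoff is defensible with only eight site means is exactly the question at issue; the sketch only shows the mechanics.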
At 10:05 AM 12/1/2008, Dates, Brian wrote:
>The 10 patient charts are not rated on a scale; they are simply
>examined for the purpose of determining scores on each of the 15
>items, so they really don't constitute a source of data.

Brian,
Thanks for these details. My knee-jerk reaction is that you're throwing away a lot of data in a sparse data environment. I would argue that the patient charts DO constitute a source of data, and that what you're thinking of as "data" is a summary statistic. It looks to me like you've got 80 patient charts each rated on 15 assessment items, plus two ID variables (site ID and patient ID), plus your summary statistic. Each of the items is rated on the same kind of assessment scale, from poor to good (or vice versa), aligned in the same direction (e.g., a low score always means poor and a high score always means good).

>So, I have eight averages from the eight different sites. My client
>would like to have a cutoff score based on the eight sites, which
>would help determine which sites are doing well and which are not.

An analysis of variance done directly on your original data might reveal whether there is any difference among the sites. A number of other measures might also be useful. If your sites all cluster around a mean score, a cutoff score might be quite arbitrary and possibly not meaningful (e.g., merely a random deviation).

>I've calculated cutoff scores in the past, but always with a large
>number of respondents. I'm not sure that this number of sites even
>justifies setting a cutoff score.

I would be worried about the use of a cutoff score at all, unless the average scores for the 8 sites are (a) bimodal or (b) have outliers on the "poor" end. But as I say, this is an off-the-cuff response, and I am not an expert in such matters. And your employers may not be interested in other statistical measures.

Good luck,
Bob Schacht
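Bob's suggestion of running an analysis of variance directly on the original data could be sketched like this, with a hand-computed one-way F test on hypothetical item-level scores (only three sites shown to keep it short; with real data one would use an ANOVA procedure in SPSS, and note that treating the 15 items per site as independent observations is itself an assumption worth questioning):

```python
# One-way ANOVA by hand: do mean fidelity scores differ across sites?
# The ratings below are made-up stand-ins for the real chart-review scores.
groups = {
    "Site 1": [4, 3, 4, 5, 3],
    "Site 2": [3, 3, 4, 4, 3],
    "Site 3": [2, 2, 3, 2, 3],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-group sum of squares: how far each site mean sits from the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
# Within-group sum of squares: item-to-item scatter inside each site.
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)

df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
```

If SciPy were available, `scipy.stats.f_oneway(*groups.values())` should reproduce the same F statistic. The point of Bob's advice is that the item-level spread within each site, not just the eight summary means, is what tells you whether sites genuinely differ.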
