
Re: Input Data

Posted by Rich Ulrich on Apr 24, 2013; 3:33am
URL: http://spssx-discussion.165.s1.nabble.com/Input-Data-tp5719569p5719672.html

Ryan,
You are right about one thing.  Before this reply, the original poster
was being led down this wrong path by replies that were descriptive and
generally neutral.  I felt a responsibility to point out that it could be
difficult or impossible (with these data) to do the usual IRT modeling.
(For one thing:  the only variance available is between-facility, rather
than between-subject, within-facility.)  And the only reason for
restructuring the data with CASESTOVARS was the hope that this form
would help with that modeling.  From David's newer post, he has doubts
about that, too.

Here, you do add some helpful, further description of what can be done.
However, I believe you go beyond what is "scientific" when you say, "far more
sensitively measured."  I've never seen that.

My own description was brief, and focused otherwise.  Despite your
comments, I don't see anything that I should change.

Finally, many of the benefits, including those you point to, are ones that
I see as most useful for the people who are developing items and scales
before a scale is published.

--
Rich Ulrich

Date: Mon, 22 Apr 2013 20:05:01 -0400
From: [hidden email]
Subject: Re: Input Data
To: [hidden email]

Rich,

Your statement that I would have anyone do anything is uncalled for. Yes, if one wants to develop an interval-level measure, the Rasch model has been shown to be useful for that task. Moreover, your explanation of IRT modeling is a gross oversimplification. Calibration of response options goes hand-in-hand with calibration of items. Psychometric properties such as reliability are far more sensitively measured using Rasch/IRT modeling. Dimensionality assessment is often confounded with item calibration when EFA/PCA is applied to raw scores. And the list goes on and on.

This dismissive attitude is unscientific, at best. 

Ryan 

Sent from my iPhone

On Apr 22, 2013, at 7:49 PM, Rich Ulrich <[hidden email]> wrote:


[re-sent]
Yes, exactly.

To answer an earlier question:  "Yes, that does throw away
information, but (usually) not much."  The Item Response
Theory (IRT)/Rasch modeling -- that David and Ryan would have you do --
would usually result in the same final step.  Their final step lets
you create the final score similarly, but instead of the values (1, 2, 3),
it might use (1, 1.9, 3) as the terms to multiply by.
These unequal intervals, like 0.9 and 1.1, are derived from the data.
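The contrast between the two scoring schemes can be sketched in a few lines; the weights (1, 1.9, 3) are the hypothetical IRT-derived values from the paragraph above, and the item responses below are made up purely for illustration:

```python
# Compare a total score built from the integer category values (1, 2, 3)
# with one built from hypothetical IRT-derived values (1, 1.9, 3).

integer_values = {1: 1.0, 2: 2.0, 3: 3.0}
irt_values = {1: 1.0, 2: 1.9, 3: 3.0}  # unequal intervals: 0.9 and 1.1

responses = [3, 2, 2, 1, 3]  # one subject's ratings on five items (made up)

integer_score = sum(integer_values[r] for r in responses)
irt_score = sum(irt_values[r] for r in responses)

print(integer_score)          # 11.0
print(round(irt_score, 1))    # 10.8
```

As Rich notes, the two totals rarely differ by much, which is the point of the Likert result discussed below.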

SOME THEORY.
In the 1930s, Likert showed that there is very little impact on analyses
and conclusions when you use integer values in place of near-interval-spaced
true scores.  IIRC, it was Likert who also showed that, when it comes to
constructing total scores, unequal weights can cost you more in reliability
(by shortening the effective "length" of the scale) than you gain in precision.

These insights served psychometrics very well, with very little challenge,
for the next 60 years.  However, in the 1990s, computer software and
statistical theory had advanced enough so that it became feasible to define
and defend unequal intervals.  The "techniques" also contribute to an aura
of sophistication and precision, even when the sample Ns are too small or the
scales are too short to actually benefit.  Computers score up rating scales
just about as fast and accurately, regardless of the scoring algorithms.

--
Rich Ulrich

> Date: Sat, 20 Apr 2013 04:35:26 -0700

> From: [hidden email]
> Subject: Re: Input Data
> To: [hidden email]
>
> Thank you!
>
> To start, I will make those scores for the type of hospitals! So I have to
> multiply the outputs (%) by the scores and then make the sum of them? So
> that every hospital will have a score out of 3?
> Example: 0.08 * 1 + 0.12 * 2 + 0.80 * 3 = 2.72?
>
>
>
> --
> View this message in context: http://spssx-discussion.1045642.n5.nabble.com/Input-Data-tp5719569p5719591.html
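The arithmetic in the quoted question (to which Rich replied "Yes, exactly") can be checked with a short sketch; the proportions are the ones from the example, and the result is a weighted mean on the 1-to-3 scale:

```python
# Weighted score for one hospital: multiply each response category's
# proportion by its score value and sum, as in the quoted example.

proportions = [0.08, 0.12, 0.80]  # share of responses in categories 1, 2, 3
values = [1, 2, 3]                # integer score per category

score = sum(p * v for p, v in zip(proportions, values))
print(round(score, 2))  # 2.72
```

The proportions for each hospital should sum to 1, so each hospital ends up with a single score between 1 and 3.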