Input Data

Re: Input Data

David Marso
Administrator
Long ago I studied a bunch of articles which drew parallels between Rasch models and unidimensionality within a factor-analytic context.  Since optimal scaling / nonlinear principal components / correspondence analysis strives for the same sort of ordination, it would be interesting to examine the parallels from both a mathematical perspective and the actual numerical results.  I studied under both Ben Wright and R.D. Bock back in the good old days at the University of Chicago and got some interesting perspectives on the various IRT camps (let's just say Ben and Darrell didn't exactly see eye to eye).
---
R B wrote
David,
You raise a critically important point, which I was not going to address.
However, since you brought it up, I will, at least very briefly discuss
this issue.
This is where latent trait theory (a.k.a. item response theory) is
particularly helpful. If there is a latent variable (construct), then
IRT modeling, particularly Rasch modeling (e.g., via an
adjacent-category logistic regression model), can be used to evaluate the
extent to which response options for each item are ordered, the trait level
at which there is an equal probability of endorsing adjacent categories
(Andrich thresholds), and the average trait level of those who responded
to that item, all of which are measured on the logit, interval-level scale.
So often I have found through the use of IRT modeling that the average
trait level of those who endorse a particular response option for a
particular item (e.g., 'Almost Always') is lower than the average trait level
for a response option on that same item which was assumed to be
at a lower level (e.g., 'Often'); i.e.,
Never=1, Sometimes=2, Rarely=3, Often=4, and Almost Always=5
I have also come across disordered Andrich thresholds...
This is one of many examples. These diagnostics, if you will, should be
evaluated and remedied before analysis, whenever possible.
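To make the threshold idea concrete, here is a minimal numeric sketch of the Rasch rating scale model (Python purely for illustration; the item difficulty and Andrich threshold values below are made up, not estimates from any real scale). Adjacent categories are equally probable exactly when the trait level sits at an Andrich threshold:

```python
import math

def rsm_probs(theta, delta, taus):
    """Category probabilities under the Rasch rating scale model.
    theta: person trait level (logits); delta: item difficulty (logits);
    taus: Andrich thresholds tau_1..tau_m (logits)."""
    # Adjacent-category logits accumulate; category 0 is the reference.
    logits = [0.0]
    total_logit = 0.0
    for tau in taus:
        total_logit += theta - (delta + tau)
        logits.append(total_logit)
    expd = [math.exp(v) for v in logits]
    norm = sum(expd)
    return [v / norm for v in expd]

# With delta = 0 and thresholds (-1, 1), a person at theta = -1 (the first
# threshold) is equally likely to endorse categories 0 and 1:
rsm_probs(-1.0, 0.0, [-1.0, 1.0])
```

(Only a sketch; dedicated Rasch software estimates delta and the taus from data.)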
The point I made above is intrinsically connected to David's point about
the assumption of having an interval-level measure; the idea behind a type
of Rasch rating scale, for example, is that one need not make such an
assumption. In fact, the point is to convert raw scores from an ordinal-level
scale, at best, to an interval-level measure via an adjacent-category
logistic regression model, and by doing so, one shines a big bright light
on the assumptions (e.g., interval level) built into the original scale,
both at the response option level, as well as the item level. There are
other related benefits that I do not have time to discuss.
I do appreciate Rich's suggestion; however, depending on the OP's experience
with *contemporary* psychometrics as well as access to the data, attempting
to convert an ordinal-level scale, at best, to an interval-level measure
may not be feasible.
*Taking advantage of the properties of the logit scale is by no means a new
idea (Georg Rasch's original work dates back to 1960), but most
measurement theory textbooks label it as contemporary to distinguish
it from Classical Test Theory (CTT).

I'm not sure if this is exactly what David was after, but this seems to be
related, at least to some degree.

Green tea, no sugar, please. :-)
Best,
Ryan


On Sat, Apr 20, 2013 at 11:34 AM, David Marso <[hidden email]> wrote:

> Note: If you proceed with the idea of using Sum(Pi * Xi) as a 'mean'
> response
> as suggested previously by Rich you are adopting the dubious assumption
> that
> the 'measure' is ratio (equal interval etc...).
> People do it all the time; that doesn't make it right!  Several decades ago
> the ideas/seeds of optimal scaling, correspondence analysis and the like were
> planted but seem not to have blossomed in the minds of most analysts (I
> was mucking around in those fields about 25 years ago).  My last thoughts
> on
> this matter.  Good luck with the thesis.
> http://www.search.ask.com/web?l=dis&o=15555&qsrc=2873&q=Optimal%20scaling
> I'm now clawing my way out of the rabbit hole.
> Tea time!  Let's see we have Earl Grey, Shroom Boom, a wonderful Emperor
> Green Tea, Lipton and Hemlock.
> Toss the dice blindfolded?
>
> ---------------------------------------------------------------------------------
>
> LiesVW wrote
> > Thank you
> >
> > I didn't put all the necessary information into my first message.
> > I have a research question and I do know what I want to do with the data.
> > I just didn't know how to restructure the data, but now I have an answer
> > to that.
> >
> >
> > R B wrote
> >> Hmmm. A thesis, huh? And you have come to SPSS-L with a vague question
> >> about performing a regression analysis,  without any hint of an actual
> >> research question.
> >>
> >> So, we are supposed to solve all data structuring, psychometric, and
> >> analytic issues in order to help you arrive at some unspecified
> >> regression analysis. Shall we devise your research questions for you, as
> >> well?
> >>
> >> A thesis should synthesize much of what you have learned during
> >> your formal education into a real-world study.
> >>
> >> Obviously if you have a specific SPSS question or even a specific
> >> statistical question as it relates to SPSS, I would be more inclined to
> >> help. But the message I just read is nowhere near what is acceptable
> >> for me to provide any assistance on this forum.
> >>
> >> As I will continue to say to students, go to your research advisor for
> >> guidance.
> >>
> >> Ryan
> >>
> >> On Apr 19, 2013, at 2:48 PM, LiesVW <liezewit@> wrote:
> >>
> >>> Hello
> >>>
> >>> I write my thesis about the quality of hospitals. Now, I found a
> dataset
> >>> about the patientsatisfaction (surveys). The output is set in % and the
> >>> % on
> >>> each question. So, my rows exists of the hospitals, the columns are the
> >>> answers.
> >>>
> >>> Example:
> >>> "% of patients who think the room was clean enough - always"
> >>> "% of paitenst who think the room was clean enough" - sometimes"
> >>> "% of paitenst who think the room was clean enough" - never"
> >>>
> >>> And so on for the other questions.
> >>>
> >>> How do I have to work with these data in SPSS?
> >>> I wanted to run a regression analysis, but do I have to define these
> >>> output items as a "multiple response" thing? Or can I just work with
> >>> them separately?
> >>>
> >>> Thanks!
> >>>
> >>>
> >>>
> >>> --
> >>> View this message in context:
> >>>
> http://spssx-discussion.1045642.n5.nabble.com/Input-Data-tp5719569.html
> >>> Sent from the SPSSX Discussion mailing list archive at Nabble.com.
> >>>
> >>> =====================
> >>> To manage your subscription to SPSSX-L, send a message to
> >>> LISTSERV@.UGA (not to SPSSX-L), with no body text except the
> >>> command. To leave the list, send the command
> >>> SIGNOFF SPSSX-L
> >>> For a list of commands to manage subscriptions, send the command
> >>> INFO REFCARD
> >>
>
>
>
>
>
> -----
> Please reply to the list and not to my personal email.
> Those desiring my consulting or training services please feel free to
> email me.
> ---
> "Nolite dare sanctum canibus neque mittatis margaritas vestras ante porcos
> ne forte conculcent eas pedibus suis."
> Cum es damnatorum possederunt porcos iens ut salire off sanguinum cliff in
> abyssum?"
> --
> View this message in context:
> http://spssx-discussion.1045642.n5.nabble.com/Input-Data-tp5719569p5719596.html
> Sent from the SPSSX Discussion mailing list archive at Nabble.com.
>
>
Please reply to the list and not to my personal email.
Those desiring my consulting or training services please feel free to email me.
---
"Nolite dare sanctum canibus neque mittatis margaritas vestras ante porcos ne forte conculcent eas pedibus suis."
Cum es damnatorum possederunt porcos iens ut salire off sanguinum cliff in abyssum?"

Re: Input Data

Rich Ulrich
In reply to this post by Ryan
I wrote in my earlier post -
  What do you do with these numbers?  The simple and direct approach,
   which I think should be the first approach, is to convert those percentages
   (for "Clean", and for other items) into a single score for each topic. 

  Thus, by scoring (Never=1, Sometimes=2, Always=3) and multiplying by
   the percentage fractions, you recover an average-item score for each
   hospital. 
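As a minimal sketch of that per-hospital scoring (Python purely for illustration; in SPSS itself this would be a single COMPUTE statement; the example fractions below are hypothetical):

```python
def item_score(fractions, weights=(1, 2, 3)):
    """Average-item score for one hospital: each response option's
    percentage fraction times its assigned score (Never=1, Sometimes=2,
    Always=3).  `fractions` should sum to 1."""
    return sum(p * w for p, w in zip(fractions, weights))

# e.g. 8% Never, 12% Sometimes, 80% Always:
item_score((0.08, 0.12, 0.80))  # -> 2.72 (up to float rounding)
```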

I still think that is the best *first* approach, even for an experienced analyst. 
Since it is simple and direct, you have far less concern about describing or
justifying the complicated alternatives.

1) If there is nothing there, you don't have the worry that you have to convince
anyone else that you didn't screw up in the complications.  I believe that will
be a potential problem even for a data analyst experienced in Scaling.
2) If there is something there, its face validity (to the usual novice) is superior.
3) Given what may be the political interest in the topic, private versus public
hospitals, the better *ultimate* report might make use of analyses that are
*less* sophisticated, rather than more sophisticated.  (I'm thinking of using
a dichotomy, so the report can focus on a single extreme, Never or Always.)

Is the conversion from one row-per-hospital to three rows useful for Rasch
analysis?  I assume that it must be, because it surely does not give you what
you want to use in a regression.  - From IRT or Rasch, you get values to use for
scoring that are different from (1,2,3); and then you go back to the original file
and apply those other numbers to get a score.  (Ryan also points out, I think,
that your sophisticated analysis might show that your data are inconsistent
and therefore hard to use.  That is more useful for re-designing a bad set of
data than reporting on a decent one.)

--
Rich Ulrich


Date: Sat, 20 Apr 2013 12:56:19 -0400
From: [hidden email]
Subject: Re: Input Data
To: [hidden email]

...snip, previous

Automatic reply: Input Data

John McConnell-2
In reply to this post by LiesVW

Hi

I'll be out of the office today - Monday 22nd April.

I will check mail intermittently and get back to you as soon as I can.

Thanks

John


Re: Input Data

John F Hall
In reply to this post by LiesVW
Don't be put off by negative comments.  Your project is legitimate and your
Excel file is exactly the same format as SPSS uses and can be directly
imported into SPSS.  SPSS is far superior to Excel for statistical analysis.
Do you have access to SPSS?

If so you can import your data and variable names into the SPSS Data Editor
with:

File > New > Syntax:

GET DATA
  /TYPE=XLSX
  /FILE='<filename>'.

I don't know how large your data set is or whether you have access to the
original raw ratings, but I would be happier if the raw data were there as
well.  You can do a hell of a lot with frequencies and crosstabs (and also
with barcharts) before thinking about regression.


John F Hall (Mr)
[Retired academic survey researcher]

Email:   [hidden email]
Website: www.surveyresearch.weebly.com
Start page:  www.surveyresearch.weebly.com/spss-without-tears.html







-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
LiesVW
Sent: 20 April 2013 13:00
To: [hidden email]
Subject: Re: Input Data

Hello!

First of all, thank you everybody for your answers and time! I'm a Belgian
student at the Free University of Brussels, so I apologize in advance for
any spelling mistakes.

With the data, I want to analyse whether there's a difference in the care
that hospitals in the US give. I analyze three kinds of hospitals: profit
(private), non-profit (public) and private hospitals. The two basic research
questions are:
"Private hospitals give a better quality of care to their patients than
public hospitals."
"Private profit hospitals give less quality of care than public and
non-profit hospitals."

@Rich Ulrich, yes, I want to put the three 'options' at each question into
one variable, like 'Cleanness' and so on. I have a book about SPSS with
an explanation of 'multiple response', but that assumes a different kind of
input, where you only have '1', '2' or '3' as 'answers'. I can't work
with that method, can I? So do I have to multiply the percentages by
'a degree of satisfaction'? Like 0.80*1?
But then I still have three 'suboutcomes' for each variable. Can I work with
those data? And how do I define these variables in SPSS, like 'Cleanness'?

There's a question "Would you recommend the hospital?" -> 'Yes, absolutely',
'Yes, probably' and 'No', and that would be my indicator. Once I know which
hospital type provides the best quality (I think I can get an answer by just
seeing which hospital type gets the most 'Yes, absolutely' answers, but I
just don't know if the input in % is okay the way it is now), I want to run
a regression analysis (if possible) to see which variables are the most
correlated with recommending the hospital, and whether the definition of good
quality differs between the hospital types (according to the patients). Is
it possible?

I will make an image of my excel file!


Thanks again everyone!

<http://spssx-discussion.1045642.n5.nabble.com/file/n5719586/Excel.png>



--
View this message in context:
http://spssx-discussion.1045642.n5.nabble.com/Input-Data-tp5719569p5719586.html
Sent from the SPSSX Discussion mailing list archive at Nabble.com.



Re: Input Data

LiesVW
Hello!

Thank you for your positive comment :)
I don't have the raw data, only the %.
I know I can do a lot with descriptive
statistics, frequencies, cross-tabs, and I
already did that. But the problem is that
every answer option at each question takes
a row in Excel, as you can see.
So with the command VARSTOCASES it's possible
to rename the variables to communication,
cleanness, ... That's nicer to read than Q.1, Q.2, ...
But do I have to multiply the % first by the scores that
I get with that test, before I do the VARSTOCASES?
Because otherwise I'm working with %, and I don't think
that will give me the right output?

The professor who helps me with my thesis said that
a probit regression analysis would be the best option, if
I do a regression analysis. So is it possible to work
with the scores I get (the % * the score that I get after
the test for 'never', 'sometimes' and 'always') for each hospital, and do that probit regression
after I do the VARSTOCASES?

Thanks again!! I appreciate all of the help I get here!!

Re: Input Data

LiesVW
Sorry, I mean logit regression analysis!!

Re: Input Data

Rich Ulrich
In reply to this post by LiesVW

[re-sent]
Yes, exactly.

To answer an earlier question:  "Yes, that does throw away
information, but (usually) not much."  The Item Response
Theory / Rasch modeling -- that David and Ryan would have you do --
would result, usually, in the same final step.  Their final step lets
you create the final score similarly, but instead of scoring (1,2,3),
it might score while using (1, 1.9, 3) as the terms to multiply by.
These unequal intervals like .9, 1.1  are derived from the data.

SOME THEORY.
In the 1930s, Likert showed that there is very little impact on analyses
and conclusions when you use the integer values for  near-interval-spaced
True-scores.  IIRC, it was Likert who also showed that when it comes to
constructing total-scores, unequal weights can cost you more in reliability
(by shortening the effective "length" of the scale) than you gain in precision.

These insights served psychometrics very well, with very little challenge,
for the next 60 years.  However, in the 1990s, computer software and
statistical theory had advanced enough so that it became feasible to define
and defend unequal intervals.  The "techniques" also contribute to an aura
of sophistication and precision, even when the sample Ns are too small or the
scales are too short to actually benefit.  Computers score up rating scales
just about as fast and accurately, regardless of the scoring algorithms.
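The claim that unequal spacings usually change little can be eyeballed with a quick sketch (Python purely for illustration; the (1, 1.9, 3) spacing is the hypothetical data-derived scoring mentioned above, and the fractions are made up):

```python
def score(fractions, weights):
    # weighted sum of response-option fractions
    return sum(p * w for p, w in zip(fractions, weights))

pcts = (0.08, 0.12, 0.80)           # Never / Sometimes / Always
equal = score(pcts, (1, 2, 3))      # integer scoring
derived = score(pcts, (1, 1.9, 3))  # data-derived scoring
# equal is about 2.720 and derived about 2.708 --
# a difference of roughly 0.012 on a 1-3 scale
```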

--
Rich Ulrich

> Date: Sat, 20 Apr 2013 04:35:26 -0700

> From: [hidden email]
> Subject: Re: Input Data
> To: [hidden email]
>
> Thank you!
>
> To start, I will make those scores for the type of hospitals! So I have to
> multiply the outputs (%) by the scores and then take the sum of them? So
> that every hospital will have a score at 3?
> Example: 0.08 * 1 + 0.12 * 2 + 0.80 * 3 = 2.72?
>
>
>
> --
> View this message in context: http://spssx-discussion.1045642.n5.nabble.com/Input-Data-tp5719569p5719591.html


Re: Input Data

Ryan
Rich,

Your statement that I would have anyone do anything is uncalled for. Yes, if one wants to develop an interval-level measure, the Rasch model has been shown to be useful for that task. Moreover, your explanation of IRT modeling is a gross oversimplification. Calibration of response options goes hand-in-hand with calibration of items. Psychometric properties such as reliability are measured far more sensitively using Rasch/IRT modeling. Dimensionality assessments based on EFA/PCA of raw scores are often confounded with item calibrations. And the list goes on and on.

This dismissive attitude is unscientific, at best. 

Ryan 

Sent from my iPhone

On Apr 22, 2013, at 7:49 PM, Rich Ulrich <[hidden email]> wrote:


...snip, previous


Re: Input Data

Ryan
In reply to this post by Rich Ulrich
Suppose an instrument intended to measure an attribute consists of ten items. Where exactly do those items fall along the continuum of the construct? How well do the items target the sampled individuals? Where along the continuum is each item most reliable? Where along the construct is there a lack of item coverage? Perhaps we have over-coverage in one area but little to no coverage in another. One could actually increase reliability simply by replacing redundant items. Shall an item which requires a low level of an attribute be weighted equally with items which require a greater amount of the attribute? What are the implications for persons' estimated trait levels if we do? How well does each item conform to the unidimensional construct? Why is it not conforming? Outlier misfit pattern? Inlier misfit pattern? Dimensionality? How well do individuals or groups of individuals conform to the model? Unique clinical presentations can easily be identified through a Rasch model.

I am scratching the surface of the possibilities of a Rasch model. There is a reason the educational field has moved in this direction. There is a reason the Stanford-Binet has now incorporated Rasch models. There is a reason computer adaptive testing is based upon IRT modeling and not CTT: one can rapidly arrive at an individual's trait level, nearly as well as with a full test, without having to administer all of the items...

Informative discourse is useful. Uninformed simplifications and finger pointing are not.

Assuming that use of an advanced psychometric technique is unnecessary without empirical evidence is...

I have enjoyed our exchanges and have learned a great deal from you, but I do not agree with oversimplified remarks about contemporary psychometrics, which has been shown to move the measurement field forward substantially.

Ryan 

On Apr 22, 2013, at 7:49 PM, Rich Ulrich <[hidden email]> wrote:


...snip, previous


Re: Input Data

David Marso
Administrator
Throwing a little more wood into the fire:
At current glance we have NO idea what the OP's actual pseudo 'MR questions' are, or whether they would actually be amenable to scaling in an IRT/NL-PCA/MCA context.  Since the original data are aggregated percentages across many(?) respondents for a given unit of observation (hospital), I am most uncertain how to plug that into an IRT umbrella.  There might be statistically more appropriate approaches than Score(j) = Sum_i(P(ij)*X(ij)), but I don't have the time or motivation to go there at the moment.
CEFTW (Close enough for Thesis work?).

---
Ryan wrote
...snip, previous

Ryan

On Apr 22, 2013, at 7:49 PM, Rich Ulrich <[hidden email]> wrote:

> ...snip, previous

Re: Input Data

Rich Ulrich
In reply to this post by Ryan
Ryan,
You are right about one thing.  Before this reply, the original poster
was being led down the wrong path by replies that were descriptive and
generally neutral.  I think I took on the responsibility of pointing out
that it could be difficult or impossible (with these data) to do the usual
IRT modeling.  (For one thing: the only variance available is between-
facility, rather than between-subject, within-facility.)  And that the only
reason for rewriting the data (CASESTOVARS) was in hopes that this form
would help do that modeling.  From David's newer post, he has doubts
about that, too.

Here, you do add some helpful further description of what can be done.
However, I believe you go beyond what is "scientific" when you say, "far more
sensitively measured."  I've never seen that.

My own description was brief, and focused otherwise.  Despite your
comments, I don't see anything that I should change.

Finally, many of the benefits, including what you point to, are ones that
I see as most useful for the people who are developing items and scales
before the scale is published. 

--
Rich Ulrich

Date: Mon, 22 Apr 2013 20:05:01 -0400
From: [hidden email]
Subject: Re: Input Data
To: [hidden email]

Rich,

Your statement that I would have anyone do anything is uncalled for. Yes, if one wants to develop an interval-level measure, the Rasch model has been shown to be useful in that task. Moreover, your explanation of IRT modeling is a gross oversimplification. Calibration of response options goes hand-in-hand with calibration of items. Psychometric properties such as reliability are far more sensitively measured using Rasch/IRT modeling. Dimensionality is often confounded with item calibrations when EFA/PCA is used on the raw scores. And the list goes on and on. 

This dismissive attitude is unscientific, at best. 

Ryan 

Sent from my iPhone



Automatic reply: Input Data

Fuller, Matthew
I will be out of the office until April 25th with limited access to e-mail.

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD

Re: Input Data

Ryan
In reply to this post by Rich Ulrich
Responses are interspersed below.

On Tue, Apr 23, 2013 at 11:33 PM, Rich Ulrich <[hidden email]> wrote:
Ryan,
You are right about one thing. 
Before this reply, the original poster
was being led down this wrong path by replies that were descriptive and
generally neutral.  I think I was attributing a responsibility for pointing out
that it could be difficult or impossible (with these data) to do the usual
IRT modeling.  (For one thing:  the only variance available is between-
facility, rather than between-subject, within-facility.)   And that the only
reason for re-writing data, Casestovars, was in hopes that this form
would help do that modeling.  From David's newer post, he has doubts
about that, too.
 
My response was really a reply to David's response about assuming interval-level data simply because the response options *appear* to produce interval-level data. I have felt, since the beginning, that the OP should seek consultation from his/her advisor. I did not and will not carefully consider the OP's message. However, I will say that if one does not have the original raw data, the possibility of evaluating certain psychometric properties has been lost. I cannot locate my original response, but I'm fairly certain that I said that interval-level properties should be evaluated "when possible." If I didn't, then I should have. Obviously, if it is not possible, then the point is moot. 
 

Here, you do add some helpful, further description of what can be done.
However, I believe you go beyond what is "scientific" when you say, "far more
sensitively measured."  I've never seen that.
 
In general, I have found that using IRT to convert ordinal-level data (at best) to interval-level data tends to produce a considerable gain in precision. Obviously this will not always occur, but it is what I have observed in my own work and in published work I have read over the years.
 

My own description was brief, and focused otherwise.  Despite your
comments, I don't see anything that I should change.
 
Calling *me* out by name in a public forum, and suggesting that I would advise the OP to go down a certain road, which would likely result in x, y, and z, is what I did/do not appreciate. Let me be very clear. If it appeared as though I was suggesting that the OP use IRT, that was not my intent. I make NO recommendations to the OP. I simply enjoyed the fact that another poster pointed out the importance of considering the interval-level assumption.
 
I do not think that converting scale scores, which are presumed ordinal until evaluated (e.g., aptitude/IQ tests, personality tests, affective/mood tests), to interval-level measures makes them confusing. I think the opposite is true. It is the responsibility of the psychometrician to justify the approach and explain the scores in a clear way (just as one would need to explain an IQ score of 110). I simply do not have time to elaborate on this point.
 
Anyone is free to respond to this post, but I am moving on. I've already gone OT enough with respect to this thread and SPSS-L.

Finally, many of the benefits, including what you point to, are ones that
I see as most useful for the people who are developing items and scales
before the scale is published. 
At times, IRT has the potential to allow one to carefully examine the points I made in previous posts in this thread, in ways superior to CTT-based methods. Again, I simply do not have time to elaborate.
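For readers curious about the adjacent-category machinery mentioned in this thread (Andrich thresholds, category ordering), here is a minimal sketch of rating-scale category probabilities under a Rasch-type model. The thresholds below are invented for illustration, not estimates from anyone's data:

```python
import math

def category_probs(theta, thresholds):
    """Rasch rating-scale/partial-credit category probabilities:
    P(X = k | theta) is proportional to exp(sum_{j<=k} (theta - tau_j)),
    with category 0 as the baseline (empty sum)."""
    logits = [0.0]
    for tau in thresholds:
        logits.append(logits[-1] + (theta - tau))
    m = max(logits)                       # subtract max for numerical stability
    expv = [math.exp(l - m) for l in logits]
    s = sum(expv)
    return [e / s for e in expv]

# At theta equal to a threshold, the two adjacent categories are equiprobable.
ordered = [-1.0, 0.0, 1.0]              # ordered (well-behaved) thresholds
p = category_probs(0.0, ordered)        # theta sits at the 2nd threshold
print(abs(p[1] - p[2]) < 1e-12)         # True: categories 1 and 2 equiprobable

# Disordered thresholds: some middle category is never the modal response.
disordered = [0.5, -0.5, 1.0]
never_modal = all(
    max(range(4), key=lambda k: category_probs(t / 10, disordered)[k]) != 1
    for t in range(-50, 51)             # scan trait levels -5.0 .. +5.0
)
print(never_modal)                      # True
```

With the disordered thresholds, category 1 is never the most probable response at any trait level, which is the diagnostic pattern described earlier in the thread.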




Re: Input Data

Ryan
I decided to read my original response, and I see that I pointed out that the OP may not be able to perform IRT given limited access to data. This reinforces the point I just made; my response was really not intended for the OP. It was to expound upon David's comment. Okay. I really must move on from this discussion for various reasons.
 
Perhaps in the future I will have the opportunity to join fruitful discussions on the use of psychometric techniques via SPSS.
 
Ryan




Re: Input Data

David Marso
Administrator
I concur that the data are not amenable to IRT.
I suspect my initial stab at restructuring multiple variables into one variable with VARSTOCASES was probably premature.
OTOH: here is my redemption post ;-)
It ends up with Sum(Pi*Xi) as the result.
Maybe a little easier than creating tons of COMPUTE statements across lots of variables.

DATA LIST LIST / hinfo q1_1 TO q1_3 q2_1 TO q2_3 .
BEGIN DATA
1 .50 .30 .20    .90 .05 .05
2 .30 .10 .40    .20 .70 .10
END DATA.

* Stack the response-option percentages into long form (R_Value 1-3 indexes the option).
VARSTOCASES /ID = id
 /MAKE Q1 FROM q1_1 TO q1_3
 /MAKE Q2 FROM q2_1 TO q2_3
 /INDEX = R_Value(3)
 /KEEP = hinfo.

* Weight each proportion by its option value: Pi * Xi.
DO REPEAT p=Q1 Q2 / wr=WR1 WR2 .
COMPUTE wr = p * R_Value.
END REPEAT.

* Sum the weighted terms within each hospital: Sum(Pi*Xi).
AGGREGATE OUTFILE=* /BREAK=hinfo /Scale1 Scale2=SUM(WR1 WR2).
LIST.

   HINFO   SCALE1   SCALE2

    1.00     1.70     1.15
    2.00     1.70     1.90


Number of cases read:  2    Number of cases listed:  2
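As a cross-check, the Sum(Pi*Xi) scale scores from the SPSS run above can be reproduced outside SPSS. A small Python sketch using the same toy proportions:

```python
# Same toy data as the SPSS example: per hospital, per question,
# the proportions endorsing response options 1..3.
data = {
    1: {"Q1": [0.50, 0.30, 0.20], "Q2": [0.90, 0.05, 0.05]},
    2: {"Q1": [0.30, 0.10, 0.40], "Q2": [0.20, 0.70, 0.10]},
}

def expected_score(proportions):
    """Sum(Pi * Xi) with option values Xi = 1, 2, 3."""
    return sum(p * x for x, p in enumerate(proportions, start=1))

scales = {h: {q: round(expected_score(p), 2) for q, p in qs.items()}
          for h, qs in data.items()}
print(scales)
# {1: {'Q1': 1.7, 'Q2': 1.15}, 2: {'Q1': 1.7, 'Q2': 1.9}}
```

The same function reproduces the original poster's hand calculation: proportions of 0.08, 0.12, and 0.80 across options 1..3 give 2.72.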


Re: Input Data

John F Hall
In reply to this post by LiesVW

Not sure if I’ve got the right subject heading here (from Nabble), but I just found an article on IRT which may be relevant.  Way outside my field (and the original mailing therefore deleted), but it cropped up in the same list as one of my pages on www.academia.edu when someone searched for “4-point scale” on Google.

Otto B. Walter and Heinz Holling (University of Münster, Germany)

Transitioning from Fixed-Length Questionnaires to Computer-Adaptive Versions

Zeitschrift für Psychologie / Journal of Psychology 2008; Vol. 216(1):22–28

http://iacat.org/sites/default/files/biblio/Transitioning%20from%20Fixed-Length%20Questionnaires%20to%20Computer-Adaptive%20Versions.pdf

John F Hall (Mr)

[Retired academic survey researcher]

Email:   [hidden email]

Website: www.surveyresearch.weebly.com

Start page:  www.surveyresearch.weebly.com/spss-without-tears.html
