Dichotomous Rasch Model


Dichotomous Rasch Model

Ryan
Dear SPSS-L,
 
On occasion, I have seen posts asking how to fit Rasch models in SPSS. I thought I would take this opportunity to demonstrate how one could fit a dichotomous Rasch model (all items scored 0/1) using the GENLIN procedure. Before I show the code, let us recall the equation of the dichotomous Rasch model:
 
logit(p_ij) = log[p_ij / (1 - p_ij)] = beta_j - theta_i
 
where
 
beta_j = ability of jth person
theta_i = difficulty of ith item
 
As you can see above, the dichotomous Rasch model is nothing more than the usual binary logistic regression, except that the grand intercept is removed and the item difficulties are subtracted from the person abilities. Because GENLIN estimates the item effect with the opposite sign (i.e., it estimates -theta_i rather than theta_i), the item estimates produced by the second EMMEANS sub-command must be multiplied by -1.0 to be interpreted as difficulties.
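
To make the sign flip concrete, the same model can be written in the form GENLIN actually fits (b_i below is just a label for the item effect GENLIN estimates, not a variable in the syntax, and the usual identification constant is ignored):

logit(p_ij) = log[p_ij / (1 - p_ij)] = beta_j + b_i, where b_i = -theta_i

so the item difficulties are recovered as theta_i = -1.0 * b_i.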
 
The simulation and GENLIN syntax are BELOW my name.
 
Hope this is interesting to others. I've obviously left out a lot of details. Feel free to write back if you have any questions/comments.
 
Ryan
--
 
SET SEED 873456542.
NEW FILE.
INPUT PROGRAM.
 
COMPUTE person = -99.
COMPUTE #person_ability = -99.
COMPUTE item = -99.
COMPUTE #item1_difficulty = -99.
COMPUTE #item2_difficulty = -99.
COMPUTE #item3_difficulty = -99.
COMPUTE #item4_difficulty = -99.
COMPUTE #item5_difficulty = -99.
COMPUTE #item6_difficulty = -99.
COMPUTE #item7_difficulty = -99.
COMPUTE #item8_difficulty = -99.
COMPUTE #item9_difficulty = -99.
COMPUTE #item10_difficulty = -99.
COMPUTE #item11_difficulty = -99.
COMPUTE #item12_difficulty = -99.
COMPUTE #item13_difficulty = -99.
COMPUTE #item14_difficulty = -99.
COMPUTE #item15_difficulty = -99.
COMPUTE #eta = -99.
COMPUTE #prob = -99.
COMPUTE response = -99.
 
LEAVE person to response.
 
LOOP person = 1 to 1000.
COMPUTE #person_ability = RV.NORMAL(0,1).
 
LOOP item = 1 to 15.
COMPUTE #item1_difficulty = LN(.100 / (1 - .100)).
COMPUTE #item2_difficulty = LN(.150 / (1 - .150)).
COMPUTE #item3_difficulty = LN(.200 / (1 - .200)).
COMPUTE #item4_difficulty = LN(.250 / (1 - .250)).
COMPUTE #item5_difficulty = LN(.300 / (1 - .300)).
COMPUTE #item6_difficulty = LN(.350 / (1 - .350)).
COMPUTE #item7_difficulty = LN(.400 / (1 - .400)).
COMPUTE #item8_difficulty = LN(.450 / (1 - .450)).
COMPUTE #item9_difficulty = LN(.475 / (1 - .475)).
COMPUTE #item10_difficulty = LN(.525 / (1 - .525)).
COMPUTE #item11_difficulty = LN(.550 / (1 - .550)).
COMPUTE #item12_difficulty = LN(.600 / (1 - .600)).
COMPUTE #item10_difficulty = LN(.650 / (1 - .650)).
COMPUTE #item11_difficulty = LN(.700 / (1 - .700)).
COMPUTE #item12_difficulty = LN(.750 / (1 - .750)).
COMPUTE #item13_difficulty = LN(.800 / (1 - .800)).
COMPUTE #item14_difficulty = LN(.850 / (1 - .850)).
COMPUTE #item15_difficulty = LN(.900 / (1 - .900)).
 
COMPUTE #eta =  (item=1)*(#person_ability - #item1_difficulty) +
                            (item=2)*(#person_ability - #item2_difficulty) +
                            (item=3)*(#person_ability - #item3_difficulty) +
                            (item=4)*(#person_ability - #item4_difficulty) +
                            (item=5)*(#person_ability - #item5_difficulty) +
                            (item=6)*(#person_ability - #item6_difficulty) +
                            (item=7)*(#person_ability - #item7_difficulty) +
                            (item=8)*(#person_ability - #item8_difficulty) +
                            (item=9)*(#person_ability - #item9_difficulty) +
                            (item=10)*(#person_ability - #item10_difficulty) +
                            (item=11)*(#person_ability - #item11_difficulty) +
                            (item=12)*(#person_ability - #item12_difficulty) +
                            (item=13)*(#person_ability - #item13_difficulty) +
                            (item=14)*(#person_ability - #item14_difficulty) +
                            (item=15)*(#person_ability - #item15_difficulty).
 
COMPUTE #prob = 1 / (1 + EXP(-#eta)).
COMPUTE response = RV.BERNOULLI(#prob).
 
END CASE.
END LOOP.
END LOOP.
END FILE.
END INPUT PROGRAM.
EXECUTE.
 
*Remove persons with zeros or ones on all 15 items.
AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /BREAK=person
  /response_sum=SUM(response).
 
FILTER OFF.
USE ALL.
SELECT IF (response_sum~=0).
EXECUTE.
FILTER OFF.
USE ALL.
SELECT IF (response_sum~=15).
EXECUTE.
 
*Fit Dichotomous Rasch Model.
GENLIN response (REFERENCE=FIRST) BY person item (ORDER=ASCENDING)
  /MODEL person item INTERCEPT=NO
 DISTRIBUTION=BINOMIAL LINK=LOGIT
  /EMMEANS TABLES=person SCALE=TRANSFORMED
  /EMMEANS TABLES=item SCALE=TRANSFORMED
  /MISSING CLASSMISSING=EXCLUDE
  /PRINT CPS DESCRIPTIVES MODELINFO FIT SUMMARY SOLUTION.

Re: Dichotomous Rasch Model

David Marso
Administrator
I believe the INPUT PROGRAM can be compacted as follows.
SET SEED 873456542.
NEW FILE.
INPUT PROGRAM.
+  DO REPEAT  
      Diff= #item_difficulty_01 TO #item_difficulty_15
        /X=.10 .15 .2 .25 .3 .35 .4 .45 .475 .65 .7 .75 .8 .85 .9 .
+    COMPUTE Diff=LN(X/(1-X)).
+  END REPEAT.

+  LOOP person = 1 to 1000.
+    COMPUTE #person_ability = RV.NORMAL(0,1).
+    VECTOR item_diff=#item_difficulty_01 TO #item_difficulty_15.
+    LOOP item = 1 to 15.
+      COMPUTE #prob = 1 / (1 + EXP(-( #person_ability - item_diff(item)))).
+      COMPUTE response = RV.BERNOULLI(#prob).
       LEAVE PERSON.
+      END CASE.
+    END LOOP.
+  END LOOP.
+  END FILE.
END INPUT PROGRAM.
EXECUTE.



Re: Dichotomous Rasch Model

Ryan
In reply to this post by David Marso
UPON REFLECTION, I see that I made an error (I simulated data for items 10, 11, and 12 twice!).
 
BELOW is the fixed code, in which data for all 18 items are simulated. Sorry, David, but I'm still using my original code for this second post because I received errors when trying to run yours. I'm sure it's a silly mistake on my part; I will attempt your code again a little later today. I will blame my previous error and my inability to use David's more efficient code on my flu, the time of day, and a lack of functioning brain cells, all of which are interrelated, I hope :-)
 
For now, here is the corrected code, which produces data for 18 items spanning enough of the logit scale to illustrate fitting a dichotomous Rasch model via GENLIN. Note that the code to remove problematic cases had to be fixed as well (from "15" to "18"). For those interested in this illustration, please use this code; it includes everything needed to carry out the entire simulation experiment. Simply copy, paste, and run.
 
*Generate data.
SET SEED 873456542.
 
NEW FILE.
INPUT PROGRAM.
 
COMPUTE person = -99.
COMPUTE #person_ability = -99.
COMPUTE item = -99.
COMPUTE #item1_difficulty = -99.
COMPUTE #item2_difficulty = -99.
COMPUTE #item3_difficulty = -99.
COMPUTE #item4_difficulty = -99.
COMPUTE #item5_difficulty = -99.
COMPUTE #item6_difficulty = -99.
COMPUTE #item7_difficulty = -99.
COMPUTE #item8_difficulty = -99.
COMPUTE #item9_difficulty = -99.
COMPUTE #item10_difficulty = -99.
COMPUTE #item11_difficulty = -99.
COMPUTE #item12_difficulty = -99.
COMPUTE #item13_difficulty = -99.
COMPUTE #item14_difficulty = -99.
COMPUTE #item15_difficulty = -99.
COMPUTE #item16_difficulty = -99.
COMPUTE #item17_difficulty = -99.
COMPUTE #item18_difficulty = -99.
 
COMPUTE #eta = -99.
COMPUTE #prob = -99.
COMPUTE response = -99.
 
LEAVE person to response.
 
LOOP person = 1 to 1000.
COMPUTE #person_ability = RV.NORMAL(0,1).
 
LOOP item = 1 to 18.
COMPUTE #item1_difficulty = LN(.100 / (1 - .100)).
COMPUTE #item2_difficulty = LN(.150 / (1 - .150)).
COMPUTE #item3_difficulty = LN(.200 / (1 - .200)).
COMPUTE #item4_difficulty = LN(.250 / (1 - .250)).
COMPUTE #item5_difficulty = LN(.300 / (1 - .300)).
COMPUTE #item6_difficulty = LN(.350 / (1 - .350)).
COMPUTE #item7_difficulty = LN(.400 / (1 - .400)).
COMPUTE #item8_difficulty = LN(.450 / (1 - .450)).
COMPUTE #item9_difficulty = LN(.475 / (1 - .475)).
COMPUTE #item10_difficulty = LN(.525 / (1 - .525)).
COMPUTE #item11_difficulty = LN(.550 / (1 - .550)).
COMPUTE #item12_difficulty = LN(.600 / (1 - .600)).
COMPUTE #item13_difficulty = LN(.650 / (1 - .650)).
COMPUTE #item14_difficulty = LN(.700 / (1 - .700)).
COMPUTE #item15_difficulty = LN(.750 / (1 - .750)).
COMPUTE #item16_difficulty = LN(.800 / (1 - .800)).
COMPUTE #item17_difficulty = LN(.850 / (1 - .850)).
COMPUTE #item18_difficulty = LN(.900 / (1 - .900)).
 
COMPUTE #eta =  (item=1)*(#person_ability - #item1_difficulty) +
                            (item=2)*(#person_ability - #item2_difficulty) +
                            (item=3)*(#person_ability - #item3_difficulty) +
                            (item=4)*(#person_ability - #item4_difficulty) +
                            (item=5)*(#person_ability - #item5_difficulty) +
                            (item=6)*(#person_ability - #item6_difficulty) +
                            (item=7)*(#person_ability - #item7_difficulty) +
                            (item=8)*(#person_ability - #item8_difficulty) +
                            (item=9)*(#person_ability - #item9_difficulty) +
                            (item=10)*(#person_ability - #item10_difficulty) +
                            (item=11)*(#person_ability - #item11_difficulty) +
                            (item=12)*(#person_ability - #item12_difficulty) +
                            (item=13)*(#person_ability - #item13_difficulty) +
                            (item=14)*(#person_ability - #item14_difficulty) +
                            (item=15)*(#person_ability - #item15_difficulty) +
                            (item=16)*(#person_ability - #item16_difficulty) +
                            (item=17)*(#person_ability - #item17_difficulty) +
                            (item=18)*(#person_ability - #item18_difficulty).
 
COMPUTE #prob = 1 / (1 + EXP(-#eta)).
COMPUTE response = RV.BERNOULLI(#prob).
END CASE.
END LOOP.
END LOOP.
END FILE.
END INPUT PROGRAM.
EXECUTE.
 
 *Remove persons with zeros or ones on all 18 items.
AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /BREAK=person
  /response_sum=SUM(response).
 
FILTER OFF.
USE ALL.
SELECT IF (response_sum~=0).
 EXECUTE.
FILTER OFF.
USE ALL.
SELECT IF (response_sum~=18).
EXECUTE.
 
*Fit Dichotomous Rasch Model.
GENLIN response (REFERENCE=FIRST) BY person item (ORDER=ASCENDING)
  /MODEL person item INTERCEPT=NO
  DISTRIBUTION=BINOMIAL LINK=LOGIT
  /EMMEANS TABLES=person SCALE=TRANSFORMED
  /EMMEANS TABLES=item SCALE=TRANSFORMED
  /MISSING CLASSMISSING=EXCLUDE
  /PRINT CPS DESCRIPTIVES MODELINFO FIT SUMMARY SOLUTION.


Re: Dichotomous Rasch Model

Art Kendall
perhaps
select if range(response_sum,1,17).

or
select if response_sum ne 0 and response_sum ne 18.


Art Kendall
Social Research Consultants
Reply | Threaded
Open this post in threaded view
|

Re: Dichotomous Rasch Model

Bruce Weaver
Administrator
Here's a more general method for removing anyone who has the same score on all items.

* Remove persons with the same score on all 18 items.
* If the score is the same for all items, SD = 0.

AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /BREAK=person
  /response_SD=SD(response).
 
SELECT IF (response_SD GT 0).
EXECUTE.



Re: Dichotomous Rasch Model

Ryan
Thanks, guys. The more efficient and general the code, the easier it will be for someone to use it in the future. I was thinking I would fit the same model on the same data using WINSTEPS (specialized Rasch software) to see the extent to which the estimated item difficulties are similar. I'll write back with the WINSTEPS item difficulty estimates  as soon as possible.
 
More soon.
 
Ryan


Re: Dichotomous Rasch Model

David Marso
Administrator
In reply to this post by Ryan
Hi Ryan,
I hope your condition improves. I came down with a cold yesterday and took NyQuil at about 4 AM.
Now my head still feels like crap.
--
FWIW: I just reran my code and it did not produce any errors.
COMMENTS:
My changes basically took the 18 instances of
COMPUTE #item1_difficulty = LN(.100 / (1 - .100)).
out of the 2 loops (calculate once rather than 18,000 times).  
After all, these are #scratch variables and percolate through!  
Speaking of which (no need to do any of the initialization code either).
The #eta computation adds 17 zeros to a single nonzero term (hence the VECTOR).
Also modified the naming convention (very awkward to use TO without numeric suffix).
---
Bruce provided a nice way to eliminate 'perfect' response patterns (all 1s or no 1s) using SD.
IIRC from studying Rasch models in the late 80s, one should also remove any 'perfect' items (items that every case got correct, or that every case got incorrect).
But then one may end up with cases who are 'perfect' on the remaining, reduced item set.
Since this yields a loop without a known upper bound (gack, macro !DO time), using a sequence of aggregates seems reasonable but clunky -- where to stop? -- (exit macro, shaking head sadly: there is no way to do a valid test for !BREAK). A simpler solution is to reshape the data to N x P, pop into MATRIX, and use the CSUM and RSUM functions within a loop to prune the respondent/item pool.
Reshape back to long and then run the GENLIN.
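
For reference, one pass of that aggregate-based pruning might look like the sketch below (long format, reusing Bruce's SD trick; person, item, and response are the variables from the simulation, while person_SD and item_SD are just scratch names for the sketch). The pass would be repeated until no further cases are dropped, at which point the data are ready for GENLIN.

* One pruning pass (sketch): drop constant persons, then constant items.
AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /BREAK=person
  /person_SD=SD(response).
SELECT IF (person_SD GT 0).
AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /BREAK=item
  /item_SD=SD(response).
SELECT IF (item_SD GT 0).
EXECUTE.
* Drop the scratch variables so the pass can be repeated.
DELETE VARIABLES person_SD item_SD.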


Re: Dichotomous Rasch Model

Greg Hurtz, Ph.D.
In reply to this post by Ryan
Wanted to contribute the following to the list discussion -- the attachment was rejected but here's the message. Ryan should have the attachment via separate email.


Ryan,
The GENLIN procedure actually does work well for estimating Rasch parameters -- we've compared it to jMetrik and BilogMG with very close parameter estimates, though we haven't yet compared it to WINSTEPS. Not sure if this list accepts attachments, but I'll try attaching a poster from a conference presentation last April (2012) that has graphs with the comparisons. If it doesn't come through, I'm happy to share separately.

My setup of the GENLIN command is a little different from yours: I specify person logits as an "offset" variable to force a slope of 1, following Rasch tradition. Also, recently I've been computing the logit of the percent "incorrect" rather than the percent correct, to avoid the need to post-multiply the parameters by -1. Wright and Stone (1979) did the same.
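
Roughly, such an offset setup might look like the sketch below (a sketch only, not the syntax from the poster; person_logit is a placeholder for each person's precomputed ability logit, here taken as the logit of the person's proportion correct, and it assumes persons with all-0 or all-1 patterns have already been removed):

* Sketch only: precompute person logits, then enter them as an offset
  so the person term has a slope fixed at 1.
AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /BREAK=person
  /p_correct=MEAN(response).
COMPUTE person_logit = LN(p_correct / (1 - p_correct)).
EXECUTE.

GENLIN response (REFERENCE=FIRST) BY item (ORDER=ASCENDING)
  /MODEL item INTERCEPT=NO OFFSET=person_logit
   DISTRIBUTION=BINOMIAL LINK=LOGIT
  /EMMEANS TABLES=item SCALE=TRANSFORMED
  /PRINT FIT SUMMARY SOLUTION.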

Since I haven't compared to WINSTEPS, I'd be interested to hear what you find.

-Greg.









Re: Dichotomous Rasch Model

Bruce Weaver
Administrator
Greg, attachments can be posted (and viewed) via http://spssx-discussion.1045642.n5.nabble.com/.

HTH.



Re: Dichotomous Rasch Model

Ryan
In reply to this post by Greg Hurtz, Ph.D.
Greg and others,
 
As stated previously, the standard dichotomous Rasch equation is as follows:
logit(p_ij) = log[p_ij / (1 - p_ij)] = beta_j - theta_i
where
beta_j = ability of jth person
theta_i = difficulty of ith item
 
It is true that some IRT software programs fit 1-parameter models and call them Rasch models even though the discrimination parameter, while constant across all items, is estimated and can take a value other than 1.0. The equation for a 1-PL model under such circumstances may be expressed as:
 
logit(p_ij) = log[p_ij / (1 - p_ij)] = alpha*(beta_j - theta_i)
where
beta_j = ability of jth person
theta_i = difficulty of ith item
alpha = estimated discrimination parameter common to all items
 
Again, note that alpha is being estimated in the equation above.
 
I would argue that the model I proposed in GENLIN is not estimating a discrimination parameter common to all items; rather, it forces each item to have a discrimination (a.k.a. slope) equal to 1.0, because a discrimination of 1 is implicit in the Rasch equation. That is, the standard dichotomous Rasch equation:
 
logit(p_ij) = log[p_ij / (1 - p_ij)] = beta_j - theta_i
 
is equivalent to this equation:
 
logit(p_ij) = log[p_ij / (1 - p_ij)] = 1*(beta_j - theta_i)
 
Consequently, the way I have parameterized the model in GENLIN is, IMHO, fitting a standard dichotomous Rasch model. If I were to fit the same model using the same estimation method and convergence criteria, but explicitly forced alpha=1 as a constant, I would expect to obtain the same results. (I could empirically test this assumption; it's difficult to imagine it not being the case, but I've made mistakes many times -- e.g., last night when I initially wrote the simulation code!)
 
Here are the mean-centered item difficulties from SPSS compared to the (automatically mean-centered) item difficulties from Winsteps:
 
 SPSS    Winsteps
-2.33    -2.33
-1.98    -1.97
-1.49    -1.49
-1.27    -1.27
-0.94    -0.94
-0.56    -0.56
-0.48    -0.48
-0.23    -0.23
-0.13    -0.13
+0.10    +0.10
+0.20    +0.20
+0.46    +0.46
+0.59    +0.59
+0.95    +0.95
+1.09    +1.08
+1.57    +1.56
+2.01    +2.01
+2.44    +2.44
 
As seen above, the estimates are essentially the same.
 
Ryan
 

Re: Dichotomous Rasch Model

Ryan
In reply to this post by David Marso
David,

Thank you for the well wishes, and sorry to hear you aren't feeling well. It's that time of the year, I suppose.

Also, thank you for confirming the code is correct and providing clarification. I will incorporate it into my existing Rasch simulation experiment.

Best,

Ryan

On Mar 11, 2013, at 4:22 PM, David Marso <[hidden email]> wrote:

> Hi Ryan,
> I hope your condition improves.  I came down with a cold yesterday and took
> NyQuil at abut 4AM.
> Now my head still feels like crap.
> --
> FWIW: I just reran my code and it did not produce any errors.
> COMMENTS:
> My changes basically took the 18 instances of
> COMPUTE #item1_difficulty = LN(.100 / (1 - .100)).
> out of the 2 loops (calculate once rather than 18,000 times).
> After all, these are #scratch variables and percolate through!
> Speaking of which, there's no need for any of the initialization code either.
> The #eta computation adds 17 zeros to a single non-zero term (hence the
> VECTOR).
> Also modified the naming convention (very awkward to use TO without a numeric
> suffix).
> ---
> Bruce provided a nice way to eliminate 'perfect' response strings (all 1s or
> all 0s) using the SD.
> IIRC from studying Rasch models in the late 80s, one should also remove any
> 'perfect' items (items that all cases got correct or all got incorrect).
> But then one will possibly end up with cases who are 'perfect' on the
> remaining reduced item set.
> Since this yields a situation where you end up in a loop (gack, macro !DO
> time) without a known upper bound, using a sequence of AGGREGATEs seems
> reasonable but clunky (where to stop? There is no way to do a valid test for
> !BREAK, so I exit the macro shaking my head sadly). A simpler solution is to
> reshape the data into N x P, pop into MATRIX, and use the CSUM and RSUM
> functions within a loop to prune the respondent/item pool.
> Reshape back to long and then run the GENLIN.
>
>
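A rough MATRIX sketch of the prune-and-check idea described above (illustrative only, not code from the thread; it assumes the responses have first been reshaped to one row per person with variables item1 TO item18, e.g., via CASESTOVARS, and it shows only a single flagging pass -- the prune-and-recheck loop would repeat this after dropping the flagged rows and columns):

* Flag 'perfect' persons (all 0s or all 1s) and 'perfect' items
* (answered identically by every person) in the N x P response matrix.
MATRIX.
GET r /VARIABLES=item1 TO item18 /MISSING=OMIT.
COMPUTE person_score = RSUM(r).   /* person raw scores (column vector) */.
COMPUTE item_score = CSUM(r).     /* item totals (row vector) */.
COMPUTE bad_person = (person_score = 0) OR (person_score = NCOL(r)).
COMPUTE bad_item = (item_score = 0) OR (item_score = NROW(r)).
COMPUTE n_bad_person = CSUM(bad_person).
COMPUTE n_bad_item = RSUM(bad_item).
PRINT n_bad_person /TITLE='Persons with all-0 or all-1 response strings'.
PRINT n_bad_item /TITLE='Items answered identically by every person'.
END MATRIX.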
> Ryan Black wrote
>> UPON REFLECTION, I see that I made an error (the difficulty values for items
>> 10, 11, and 12 were each assigned twice, so the second assignments overwrote
>> the first).
>>
>> BELOW is the fixed code, in which data for all 18 items are simulated. Sorry,
>> David, but I'm still using my original code for this second post because I
>> received errors when trying to run yours. I'm sure it's a silly mistake on my
>> part, and I will attempt your code again a little later today. I will blame
>> my previous error and my inability to use David's more efficient code on my
>> flu, the time of day, and a lack of functioning brain cells, all of which are
>> interrelated, I hope :-)
>>
>> For now, here is the corrected code, which produces data for 18 items that
>> span enough of the logit scale to illustrate fitting a dichotomous Rasch
>> model via GENLIN. Note that the code to remove problematic cases had to be
>> fixed as well (from "15" to "18"). For those interested in this illustration,
>> please use this code; it includes everything needed to carry out the entire
>> simulation experiment. Simply copy, paste, and run.
>>
>> *Generate data.
>> SET SEED 873456542.
>>
>> NEW FILE.
>> INPUT PROGRAM.
>>
>> COMPUTE person = -99.
>> COMPUTE #person_ability = -99.
>> COMPUTE item = -99.
>> COMPUTE #item1_difficulty = -99.
>> COMPUTE #item2_difficulty = -99.
>> COMPUTE #item3_difficulty = -99.
>> COMPUTE #item4_difficulty = -99.
>> COMPUTE #item5_difficulty = -99.
>> COMPUTE #item6_difficulty = -99.
>> COMPUTE #item7_difficulty = -99.
>> COMPUTE #item8_difficulty = -99.
>> COMPUTE #item9_difficulty = -99.
>> COMPUTE #item10_difficulty = -99.
>> COMPUTE #item11_difficulty = -99.
>> COMPUTE #item12_difficulty = -99.
>> COMPUTE #item13_difficulty = -99.
>> COMPUTE #item14_difficulty = -99.
>> COMPUTE #item15_difficulty = -99.
>> COMPUTE #item16_difficulty = -99.
>> COMPUTE #item17_difficulty = -99.
>> COMPUTE #item18_difficulty = -99.
>>
>> COMPUTE #eta = -99.
>> COMPUTE #prob = -99.
>> COMPUTE response = -99.
>>
>> LEAVE person to response.
>>
>> LOOP person = 1 to 1000.
>> COMPUTE #person_ability = RV.NORMAL(0,1).
>>
>> LOOP item = 1 to 18.
>> COMPUTE #item1_difficulty = LN(.100 / (1 - .100)).
>> COMPUTE #item2_difficulty = LN(.150 / (1 - .150)).
>> COMPUTE #item3_difficulty = LN(.200 / (1 - .200)).
>> COMPUTE #item4_difficulty = LN(.250 / (1 - .250)).
>> COMPUTE #item5_difficulty = LN(.300 / (1 - .300)).
>> COMPUTE #item6_difficulty = LN(.350 / (1 - .350)).
>> COMPUTE #item7_difficulty = LN(.400 / (1 - .400)).
>> COMPUTE #item8_difficulty = LN(.450 / (1 - .450)).
>> COMPUTE #item9_difficulty = LN(.475 / (1 - .475)).
>> COMPUTE #item10_difficulty = LN(.525 / (1 - .525)).
>> COMPUTE #item11_difficulty = LN(.550 / (1 - .550)).
>> COMPUTE #item12_difficulty = LN(.600 / (1 - .600)).
>> COMPUTE #item13_difficulty = LN(.650 / (1 - .650)).
>> COMPUTE #item14_difficulty = LN(.700 / (1 - .700)).
>> COMPUTE #item15_difficulty = LN(.750 / (1 - .750)).
>> COMPUTE #item16_difficulty = LN(.800 / (1 - .800)).
>> COMPUTE #item17_difficulty = LN(.850 / (1 - .850)).
>> COMPUTE #item18_difficulty = LN(.900 / (1 - .900)).
>>
>> COMPUTE #eta = (item=1)*(#person_ability - #item1_difficulty) +
>>                (item=2)*(#person_ability - #item2_difficulty) +
>>                (item=3)*(#person_ability - #item3_difficulty) +
>>                (item=4)*(#person_ability - #item4_difficulty) +
>>                (item=5)*(#person_ability - #item5_difficulty) +
>>                (item=6)*(#person_ability - #item6_difficulty) +
>>                (item=7)*(#person_ability - #item7_difficulty) +
>>                (item=8)*(#person_ability - #item8_difficulty) +
>>                (item=9)*(#person_ability - #item9_difficulty) +
>>                (item=10)*(#person_ability - #item10_difficulty) +
>>                (item=11)*(#person_ability - #item11_difficulty) +
>>                (item=12)*(#person_ability - #item12_difficulty) +
>>                (item=13)*(#person_ability - #item13_difficulty) +
>>                (item=14)*(#person_ability - #item14_difficulty) +
>>                (item=15)*(#person_ability - #item15_difficulty) +
>>                (item=16)*(#person_ability - #item16_difficulty) +
>>                (item=17)*(#person_ability - #item17_difficulty) +
>>                (item=18)*(#person_ability - #item18_difficulty).
>>
>> COMPUTE #prob = 1 / (1 + EXP(-#eta)).
>> COMPUTE response = RV.BERNOULLI(#prob).
>> END CASE.
>> END LOOP.
>> END LOOP.
>> END FILE.
>> END INPUT PROGRAM.
>> EXECUTE.
>>
>> *Remove persons with zeros or ones on all 18 items.
>> AGGREGATE
>>  /OUTFILE=* MODE=ADDVARIABLES
>>  /BREAK=person
>>  /response_sum=SUM(response).
>>
>> FILTER OFF.
>> USE ALL.
>> SELECT IF (response_sum~=0).
>> EXECUTE.
>> FILTER OFF.
>> USE ALL.
>> SELECT IF (response_sum~=18).
>> EXECUTE.
>>
>> *Fit Dichotomous Rasch Model.
>> GENLIN response (REFERENCE=FIRST) BY person item (ORDER=ASCENDING)
>>  /MODEL person item INTERCEPT=NO
>>  DISTRIBUTION=BINOMIAL LINK=LOGIT
>>  /EMMEANS TABLES=person SCALE=TRANSFORMED
>>  /EMMEANS TABLES=item SCALE=TRANSFORMED
>>  /MISSING CLASSMISSING=EXCLUDE
>>  /PRINT CPS DESCRIPTIVES MODELINFO FIT SUMMARY SOLUTION.
>> On Mon, Mar 11, 2013 at 4:39 AM, David Marso <david.marso@...> wrote:
>>
>>> I believe that Input program can be compacted as follows.
>>> SET SEED 873456542.
>>> NEW FILE.
>>> INPUT PROGRAM.
>>> +  DO REPEAT
>>>      Diff= #item_difficulty_01 TO #item_difficulty_15
>>>        /X=.10 .15 .2 .25 .3 .35 .4 .45 .475 .65 .7 .75 .8 .85 .9 .
>>> +    COMPUTE Diff=LN(X/(1-X)).
>>> +  END REPEAT.
>>>
>>> +  LOOP person = 1 to 1000.
>>> +    COMPUTE #person_ability = RV.NORMAL(0,1).
>>> +    VECTOR item_diff=#item_difficulty_01 TO #item_difficulty_15.
>>> +    LOOP item = 1 to 15.
>>> +      COMPUTE #prob = 1 / (1 + EXP(-( #person_ability -
>>> item_diff(item)))).
>>> +      COMPUTE response = RV.BERNOULLI(#prob).
>>>       LEAVE PERSON.
>>> +      END CASE.
>>> +    END LOOP.
>>> +  END LOOP.
>>> +  END FILE.
>>> END INPUT PROGRAM.
>>> EXECUTE.
>>>
>>>
>


Re: Dichotomous Rasch Model

Ryan
In reply to this post by Ryan
It should also be noted that the log-likelihood Chi-Squares and dfs are essentially the same:
 
SPSS Chi-Square: 17319.58, df=16762
Winsteps Chi-Square: 17319.60, df=16762
 
Best,
 
Ryan
On Mon, Mar 11, 2013 at 8:30 PM, R B <[hidden email]> wrote:
Greg and others,
 
As stated previously, the standard dichotomous Rasch equation is as follows:

logit(p_ij) = log[p_ij / (1 - p_ij)] = beta_j - theta_i
where
beta_j = ability of jth person
theta_i = difficulty of ith item
 
It is true that some IRT software programs fit 1-parameter (1-PL) models and call them Rasch models even though the discrimination parameter, while constrained to be equal across items, is still estimated and can take a value other than 1.0. The equation for a 1-PL model under such circumstances may be expressed as:
 
logit(p_ij) = log[p_ij / (1 - p_ij)] = alpha*(beta_j - theta_i)
where
beta_j = ability of jth person
theta_i = difficulty of ith item
alpha = estimated discrimination parameter common to all items
 
Again, note that alpha is being estimated in the equation above.
 
I would argue that the model I proposed in GENLIN is not estimating a discrimination parameter that is constant across all items. The model I proposed forces each item to have a discrimination (a.k.a. slope) equal to 1.0. I assume this because implicit in the Rasch equation is that alpha=1. That is, the standard dichotomous Rasch equation:
 
logit(p_ij) = log[p_ij / (1 - p_ij)] = beta_j - theta_i
 
is equivalent to this equation:
 
logit(p_ij) = log[p_ij / (1 - p_ij)] = 1*(beta_j - theta_i)
 
Consequently, the way in which I have parameterized the model in GENLIN is, IMHO, fitting a standard dichotomous Rasch model. If I were to fit the same model using the same estimation method and convergence criteria, but explicitly forced alpha=1 as a constant, I would expect to obtain the same results. (I could empirically test this assumption, and although it's difficult to imagine that it would not hold, I have made mistakes many times; e.g., last night when I initially wrote the simulation code!)
 
Here are the mean-centered item difficulties from SPSS compared to the [automatically] mean-centered item difficulties from Winsteps:
 
 SPSS    Winsteps
-2.33    -2.33
-1.98    -1.97
-1.49    -1.49
-1.27    -1.27
-0.94    -0.94
-0.56    -0.56
-0.48    -0.48
-0.23    -0.23
-0.13    -0.13
+0.10    +0.10
+0.20    +0.20
+0.46    +0.46
+0.59    +0.59
+0.95    +0.95
+1.09    +1.08
+1.57    +1.56
+2.01    +2.01
+2.44    +2.44
 
As seen above, the estimates are essentially the same.
 
Ryan
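
For anyone reproducing the comparison above, here is a minimal sketch of the conversion step (my own illustration, not syntax from the thread): negate the transformed item EMMEANS and mean-center them before lining them up against the Winsteps values. It assumes the item EMMEANS table has been captured to a dataset (e.g., via OMS) named item_emmeans, with the transformed estimate in a variable named estimate; both names are assumptions to be adapted.

DATASET ACTIVATE item_emmeans.
* Multiply by -1 so the estimates become item difficulties, then center on their mean.
COMPUTE difficulty = -1 * estimate.
COMPUTE const = 1.
AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /BREAK=const
  /mean_difficulty=MEAN(difficulty).
COMPUTE centered_difficulty = difficulty - mean_difficulty.
EXECUTE.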
 
On Mon, Mar 11, 2013 at 4:43 PM, Greg Hurtz, Ph.D. <[hidden email]> wrote:
I wanted to contribute the following to the list discussion -- the attachment was rejected, but here's the message. Ryan should have the attachment via a separate email.



Ryan,
The GENLIN procedure actually does work well to estimate Rasch parameters -- we've compared it to jMetrik and BilogMG with very close parameter estimates, though we haven't yet compared it to WINSTEPS. I'm not sure if this list accepts attachments, but I'll try attaching a poster from a conference presentation last April (2012) that has graphs with the comparisons. If it doesn't come through, I'm happy to share it separately.

My setup of the GENLIN command is a little different from yours: I specify person logits as an "offset" variable to force a slope of 1, following Rasch tradition. Also, recently I've been computing the logit of percent "incorrect" rather than percent correct, to avoid the need for post-multiplying the parameters by -1. Wright and Stone (1979) did the same.

Since I haven't compared to WINSTEPS, I'd be interested to hear what you find.

-Greg.
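
A minimal sketch of the offset idea Greg describes (not his actual setup; the person-logit computation, the variable names, and the use of REFERENCE=LAST are my assumptions): fix the person side of the model with an offset so GENLIN estimates only the item terms, and model the probability of an incorrect response so those terms come out directly as difficulties, with no post-multiplication by -1. It presumes the long-format dataset above, with all-zero and all-correct persons already removed, and it uses person raw-score logits in the spirit of Wright and Stone's log-odds estimates, so the item values should approximate rather than exactly reproduce the joint person-by-item fit.

* Each person's logit of proportion incorrect, passed to GENLIN as a fixed offset.
AGGREGATE
  /OUTFILE=* MODE=ADDVARIABLES
  /BREAK=person
  /p_correct=MEAN(response).
COMPUTE person_logit = LN((1 - p_correct) / p_correct).
* Model P(incorrect): logit P(incorrect) = person offset + item difficulty.
GENLIN response (REFERENCE=LAST) BY item (ORDER=ASCENDING)
  /MODEL item INTERCEPT=NO OFFSET=person_logit
   DISTRIBUTION=BINOMIAL LINK=LOGIT
  /EMMEANS TABLES=item SCALE=TRANSFORMED
  /PRINT MODELINFO FIT SOLUTION.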



