I am running a GLMM in SPSS with subject as a variable, repeated measures (and between-groups variables), and random effects. I tried running the model on a short version of the data set (only 6 participants out of 157), and it took several hours to complete. I have 16 GB of RAM on my machine. How much is required to make these models run more quickly?
It would help if you sent your syntax and perhaps described the data a little more.
From: msetfi <[hidden email]>
To: [hidden email]
Date: 01/12/2015 11:42 AM
Subject: running GLMM models in SPSS memory requirements
Sent by: "SPSSX(r) Discussion" <[hidden email]>
I included data for 6 participants, each of whom has 160 rows of data. The 160 rows constitute their trial-by-trial performance on a learning task on which they can be correct or not (TARGET=CORRECT1_ERROR0). Each participant has a unique ID code (SUBJECTS=NUM), and each trial on the learning task is a repeated measure (REPEATED_MEASURES=TRIAL_NUM). I also have between-groups variables, pre_exp_1same and Context_1same, and the model includes both random and fixed effects. The syntax is below; is this enough information? Thanks so much for the speedy reply!
DATASET ACTIVATE DataSet1.
*Generalized Linear Mixed Models.
GENLINMIXED
  /DATA_STRUCTURE SUBJECTS=NUM REPEATED_MEASURES=TRIAL_NUM COVARIANCE_TYPE=DIAGONAL
  /FIELDS TARGET=CORRECT1_ERROR0 TRIALS=NONE OFFSET=NONE
  /TARGET_OPTIONS DISTRIBUTION=BINOMIAL LINK=LOGIT
  /FIXED EFFECTS=Pre_exp_1same Context_1same TRIAL_NUM
         Pre_exp_1same*Context_1same Pre_exp_1same*TRIAL_NUM Context_1same*TRIAL_NUM
         Pre_exp_1same*Context_1same*TRIAL_NUM
         USE_INTERCEPT=TRUE
  /RANDOM EFFECTS=NUM NUM*TRIAL_NUM USE_INTERCEPT=TRUE COVARIANCE_TYPE=VARIANCE_COMPONENTS
  /BUILD_OPTIONS TARGET_CATEGORY_ORDER=ASCENDING INPUTS_CATEGORY_ORDER=ASCENDING
         MAX_ITERATIONS=100 CONFIDENCE_LEVEL=95 DF_METHOD=RESIDUAL COVB=MODEL
  /EMMEANS_OPTIONS SCALE=ORIGINAL PADJUST=LSD.
Perhaps try again without that 3-way interaction. Also perhaps without the NUM*TRIAL_NUM random effect? Just to see what happens. Without having your data in hand, I don't see what further advice people can provide. --
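For reference, a trimmed version of the posted syntax along those lines (a sketch only, keeping the original variable names) would drop the 3-way interaction from the FIXED subcommand and the NUM*TRIAL_NUM term from the RANDOM subcommand, leaving just a random intercept per subject:

```
GENLINMIXED
  /DATA_STRUCTURE SUBJECTS=NUM REPEATED_MEASURES=TRIAL_NUM COVARIANCE_TYPE=DIAGONAL
  /FIELDS TARGET=CORRECT1_ERROR0 TRIALS=NONE OFFSET=NONE
  /TARGET_OPTIONS DISTRIBUTION=BINOMIAL LINK=LOGIT
  /FIXED EFFECTS=Pre_exp_1same Context_1same TRIAL_NUM
         Pre_exp_1same*Context_1same Pre_exp_1same*TRIAL_NUM Context_1same*TRIAL_NUM
         USE_INTERCEPT=TRUE
  /RANDOM EFFECTS=NUM USE_INTERCEPT=TRUE COVARIANCE_TYPE=VARIANCE_COMPONENTS
  /BUILD_OPTIONS MAX_ITERATIONS=100 CONFIDENCE_LEVEL=95 DF_METHOD=RESIDUAL COVB=MODEL.
```

Whether this fits the research question is for the analyst to judge; the point is only to see whether the simpler random-effects structure brings the run time down to something workable before adding terms back in.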
Please reply to the list and not to my personal email.
Thanks David. I am basically trying to find out whether other people have had experience running these types of models (with a data structure including subjects and repeated measures, as well as other effects), whether they generally take a really long time to run, and what their memory requirements are.