
Re: Pretest to Posttest: A question of reliability

Posted by E. Bernardo on Dec 21, 2012; 2:35pm
URL: http://spssx-discussion.165.s1.nabble.com/Pretest-to-Posttest-A-question-of-reliability-tp5717078p5717085.html

Dear RB, Ulrich, et al.

Sorry for the poor English. 

Let me rephrase the scenario.  Our variable is latent, measured by 10 items.  The ten-item, five-point Likert-type questionnaire was administered to a sample on two different occasions, F1 then F2.  So we have two correlated factors, F1 and F2, and we want to treat them as latent variables using AMOS 20.  We expected the 10 items to load significantly (p < .05) on F1 and then on F2.  However, the actual data showed that some items have nonsignificant (p > .05) factor loadings on F1 and F2; thus, the set of items that load on F1 is different from the set of items that load on F2.  Our questions were: Can we proceed to correlate F1 and F2?  Is this not a measurement problem?

Eins


B <[hidden email]>
To: [hidden email]
Sent: Thursday, December 20, 2012 9:16 PM
Subject: Re: Pretest to Posttest: A question of reliability

My responses are interspersed below.

On Thu, Dec 20, 2012 at 10:12 PM, E. Bernardo <[hidden email]> wrote:
Dear Everyone,

We ran a pretest-posttest analysis for a unidimensional scale with 10 items (say, Q1, Q2, ..., Q10).  Using the pretest data, only four items (Q2, Q3, Q4, Q5) were significant, while using the posttest data six items (Q1, Q2, Q6, Q8, Q9, Q10) were significant. 
 
You are providing conflicting information above. You state that you ran a pretest-posttest analysis for a "unidimensional scale with 10 items," which suggests to me that you derived a composite score to use as the dependent variable (e.g., you computed a sum or mean across all items for each subject). However, you go on to suggest that you performed pre-post analyses per item. Please clarify.
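The composite-score approach described above can be sketched in a few lines. This is only an illustration under assumed data: the subject-by-item matrices below are invented, and only the t statistic is computed (a p-value would need a t distribution table or SciPy).

```python
import math
import statistics

# Hypothetical example data: 5 subjects x 10 items, scored 1-5,
# at pretest and posttest (all values are illustrative only).
pre  = [[3, 4, 2, 5, 3, 4, 3, 2, 4, 3],
        [2, 3, 3, 4, 2, 3, 2, 3, 3, 2],
        [4, 4, 5, 5, 4, 4, 3, 4, 5, 4],
        [1, 2, 2, 3, 1, 2, 2, 1, 2, 2],
        [3, 3, 4, 4, 3, 3, 4, 3, 3, 3]]
post = [[4, 4, 3, 5, 4, 4, 4, 3, 4, 4],
        [3, 3, 4, 4, 3, 4, 3, 3, 4, 3],
        [5, 4, 5, 5, 5, 4, 4, 4, 5, 5],
        [2, 3, 2, 4, 2, 3, 2, 2, 3, 2],
        [4, 4, 4, 5, 3, 4, 4, 4, 4, 4]]

# Composite score per subject = mean across the 10 items.
pre_comp  = [statistics.mean(s) for s in pre]
post_comp = [statistics.mean(s) for s in post]

# Paired t statistic on the composite change scores.
diffs = [b - a for a, b in zip(pre_comp, post_comp)]
n = len(diffs)
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(f"t({n - 1}) = {t:.2f}")
```

The point of the sketch is the first step: one composite per subject per occasion, then a single pre-post comparison, rather than ten separate item-level tests.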
 
Notice that the scale has a different set of items in the pretest and posttest scenarios. 
 
What do you mean by the scale having a different set of items across the two measurement periods? Why?
 
Our question is, is the scale not reliable?
 
The reliability of composite scores on a measuring instrument and/or reliability on change scores are unrelated to what you've been discussing thus far, in my estimation.
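To make the distinction above concrete: reliability of composite scores is usually summarized by an internal-consistency index such as Cronbach's alpha, which is computed from item and total-score variances, not from per-item significance tests. A minimal sketch, using invented scores (the matrix below is illustrative, not from the original poster's data):

```python
import statistics

def cronbach_alpha(rows):
    """Cronbach's alpha for a subjects-x-items score matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    """
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # transpose: items x subjects
    item_vars = [statistics.variance(col) for col in items]
    total_var = statistics.variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical pretest responses: 5 subjects x 4 items (illustrative only).
scores = [[3, 4, 3, 4],
          [2, 3, 2, 3],
          [4, 5, 4, 4],
          [1, 2, 2, 2],
          [3, 3, 3, 4]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Note that alpha can be high or low regardless of how many item-level pre-post comparisons reached significance, which is why the reliability question is separate from the pattern of significant items.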
 
 

Suppose we extend the scenario above to more than two measures (say, Pretest, Posttest1, Posttest2, Posttest3, Posttest4) so that we can use Latent Growth Modeling (LGM) to model the latent change.  Is it a requirement in LGM that all the scale indicators/items be significant across tests?
 
Technically, having more than two measurement points does not by itself make the analysis a latent growth curve (LGC) model; conversely, you can fit an LGC model with only two points at which each subject was measured. Anyway, the answer to your final question is no.
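For intuition about what a growth model estimates, here is a deliberately simplified stand-in: fitting an ordinary least-squares line to each subject's composite scores across the five waves and averaging the intercepts and slopes. This is not a latent growth model (an LGC model is fit as a structural equation model in software like AMOS or lavaan, with latent intercept and slope factors), but it illustrates the per-subject intercept/slope idea. All data below are invented.

```python
import statistics

# Hypothetical composite scores for 4 subjects at 5 waves
# (Pretest, Posttest1..Posttest4); values are illustrative only.
waves = [0, 1, 2, 3, 4]
scores = [[2.1, 2.4, 2.9, 3.2, 3.6],
          [3.0, 3.1, 3.3, 3.6, 3.8],
          [1.8, 2.2, 2.5, 3.0, 3.3],
          [2.5, 2.6, 2.8, 3.1, 3.2]]

def ols_line(x, y):
    """Ordinary least-squares intercept and slope for one subject."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

trajectories = [ols_line(waves, s) for s in scores]
mean_intercept = statistics.mean(t[0] for t in trajectories)
mean_slope = statistics.mean(t[1] for t in trajectories)
print(f"mean intercept = {mean_intercept:.2f}, mean slope = {mean_slope:.2f}")
```

In an actual LGC model the mean intercept and mean slope are estimated as latent-variable means (with variances capturing individual differences), but the quantities being estimated are conceptually the ones computed above.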
 

Thank you for your inputs.

Eins