My responses are interspersed below.
On Thu, Dec 20, 2012 at 10:12 PM, E. Bernardo
<[hidden email]> wrote:
Dear Everyone,
We ran a pretest-posttest analysis for a unidimensional scale with 10 items (say, Q1, Q2, ..., Q10). Using the pretest data, only four items (Q2, Q3, Q4, Q5) were significant, while using the posttest data six items (Q1, Q2, Q6, Q8, Q9, Q10) were significant.
You are providing conflicting information above. You state that you ran a pretest-posttest analysis for a "unidimensional scale with 10 items" which suggests to me that you derived a composite score to use as the dependent variable (e.g., you computed a sum or mean across all items for each subject). However, you go on to suggest that you performed pre-post analyses per item. Please clarify.
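To illustrate what I mean by a composite score, here is a minimal sketch in Python. The array shape, subject count, and Likert-type responses are all hypothetical; the point is only that each subject gets one score aggregated across all 10 items, which then serves as the dependent variable.

```python
import numpy as np

# Hypothetical data: 5 subjects x 10 items (Q1..Q10), pretest responses
# on a 1-5 Likert-type scale.
rng = np.random.default_rng(0)
pretest = rng.integers(1, 6, size=(5, 10)).astype(float)

# Composite score per subject: the mean across all 10 items.
# (A sum across items would serve the same purpose.)
composite = pretest.mean(axis=1)
print(composite.shape)  # one score per subject: (5,)
```

A pre-post analysis would then compare these composites (e.g., a paired t-test on pretest vs. posttest composites), rather than testing each item separately.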
Notice that the scale has a different set of significant items in the pretest and posttest scenarios.
What do you mean that the scale has a different set of items during both measurement periods? Why?
Our question is, is the scale not reliable?
The reliability of composite scores on a measuring instrument and/or reliability on change scores are unrelated to what you've been discussing thus far, in my estimation.
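For concreteness, internal-consistency reliability of a composite is typically assessed with something like Cronbach's alpha, which is a function of item variances and the variance of the sum score, not of per-item significance tests. A small sketch, with made-up data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 subjects x 3 items with strongly related responses.
x = np.array([[1, 2, 1],
              [2, 3, 2],
              [3, 3, 3],
              [4, 5, 4],
              [5, 5, 5],
              [2, 2, 2]], dtype=float)
alpha = cronbach_alpha(x)
print(round(alpha, 3))
```

Note that nothing in this calculation depends on which items were "significant" in a pre-post comparison; those are separate questions.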
Suppose we extend the scenario above to more than two measures
(say, Pretest, Posttest1, Posttest2, Posttest3, Posttest4), so that we will use Latent Growth Modeling (LGM) to model the latent change. Is it a requirement in LGM that all the scale indicators/items be significant across tests?
Technically, just because you have more than two measurement points does not make the analysis a latent growth curve (LGC) model. You can have an LGC model with only two points at which each subject was measured. Anyway, the answer to your final question is no.
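In practice you would fit a growth model in SEM software, but the underlying idea can be sketched in plain Python: each subject's composite scores over the measurement waves define a trajectory, and the model concerns the mean and variance of those trajectories, not item-level significance. The sample sizes, true intercept/slope, and noise levels below are all invented for illustration, and per-subject least-squares lines are only a rough stand-in for the latent growth formulation:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_waves = 50, 5
time = np.arange(n_waves)  # Pretest = 0, Posttest1..4 = 1..4

# Simulated composite scores: true mean intercept 10, true mean slope 2,
# with subject-level variation in both (the "latent growth" idea).
intercepts = 10 + rng.normal(0, 1.0, n_subjects)
slopes = 2 + rng.normal(0, 0.5, n_subjects)
y = (intercepts[:, None] + slopes[:, None] * time
     + rng.normal(0, 1.0, (n_subjects, n_waves)))

# Per-subject least-squares growth lines; a growth model estimates the
# mean and variance of these trajectories across subjects.
fits = np.array([np.polyfit(time, y[i], deg=1) for i in range(n_subjects)])
print("mean slope:", round(fits[:, 0].mean(), 2))
print("slope variance:", round(fits[:, 0].var(ddof=1), 2))
```

Nothing in this setup requires every item to be individually significant at every wave; the trajectories are built from composite scores.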
Thank you for your inputs.
Eins