http://spssx-discussion.165.s1.nabble.com/Pretest-to-Posttest-A-question-of-reliability-tp5717078p5717080.html
You need to define what you are speaking of when you say "significant."
What a "pretest-posttest analysis" evokes for me is something
that uses both pre and post. Thus, you would (for instance) look
at both (a) the correlation, pre-post and (b) the t-test, pre-post.
This expectation is so strong that I can't imagine what you are
looking at - in a single sample? - where you find items "significant"
at pre, taken alone, and again at post, alone.
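For concreteness, here is a minimal sketch of that pre-post look in SPSS syntax, assuming hypothetical item variables pre_q1 to pre_q10 and post_q1 to post_q10, one row per subject (and adjacent in the file, so the TO shorthand works):

* Hypothetical variable names - adjust to your own file.
* Scale totals at each occasion.
COMPUTE pre_total = SUM(pre_q1 TO pre_q10).
COMPUTE post_total = SUM(post_q1 TO post_q10).
EXECUTE.

* (a) The pre-post correlation of the scale scores.
CORRELATIONS
  /VARIABLES=pre_total post_total
  /PRINT=TWOTAIL NOSIG.

* (b) The paired t-test, pre vs. post.
T-TEST PAIRS=pre_total WITH post_total (PAIRED).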
What are you testing? (If these are the only items that correlate
significantly with the corrected item-total, then you either have a
very tiny N or a rather disparate set of items making up a scale.)
- You can look at internal reliability via the Cronbach alpha from
"Reliability". You can look at two aspects of pre-post reliability
through the correlation (for consistency) and a significant change (for
the ability to measure a change, assuming that there should be one).
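As a sketch, a "Reliability" run for the Cronbach alpha at each occasion, using the same hypothetical item names as above; /SUMMARY=TOTAL adds the item-total statistics table, which includes each item's corrected item-total correlation:

* Internal consistency of the (hypothetically named) pretest items.
RELIABILITY
  /VARIABLES=pre_q1 TO pre_q10
  /SCALE('Pretest') ALL
  /MODEL=ALPHA
  /SUMMARY=TOTAL.

* Internal consistency of the posttest items.
RELIABILITY
  /VARIABLES=post_q1 TO post_q10
  /SCALE('Posttest') ALL
  /MODEL=ALPHA
  /SUMMARY=TOTAL.

Comparing the two alpha values and the corrected item-total columns across occasions is a more direct look at whether the item set hangs together than counting which items reach "significance" at each occasion.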
--
Rich Ulrich

Date: Fri, 21 Dec 2012 11:12:00 +0800
From: [hidden email]
Subject: Pretest to Posttest: A question of reliability
To: [hidden email]

Dear Everyone,
We do a pretest-posttest analysis for a unidimensional scale with 10 items (say, Q1, Q2, ..., Q10). Using the pretest data, only four items (Q2, Q3, Q4, Q5) were significant, while using the posttest data six items (Q1, Q2, Q6, Q8, Q9, Q10) were significant. Notice that the scale has a different set of significant items in the pretest and posttest scenarios. Our question is: is the scale not reliable?
Suppose we extend the scenario above to more than two measures
(say, Pretest, Posttest1, Posttest2, Posttest3, Posttest4), so that we can use Latent Growth Modeling (LGM) to model the latent change. Is it a requirement in LGM that all the scale indicators/items be significant across tests?
Thank you for your inputs.
Eins