Dear Everyone, We are doing a pretest-posttest analysis for a unidimensional scale with 10 items (say, Q1, Q2, ..., Q10). Using the pretest data, only four items (Q2, Q3, Q4, Q5) were significant, while using the posttest data six items (Q1, Q2, Q6, Q8, Q9, Q10) were significant. Notice that the scale has a different set of items in the pretest and posttest scenarios. Our question is: is the scale not reliable? Suppose we extend the scenario above to more than two measures (say, Pretest, Posttest1, Posttest2, Posttest3, Posttest4), so that we will use Latent Growth Modeling (LGM) to model the latent change. Is it a requirement in LGM that all the scale indicators/items are significant across tests? Thank you for your inputs. Eins
In reply to this post by E. Bernardo
You need to define what you are speaking of when you say "significant."
What a "pretest-posttest analysis" evokes for me is something that uses both pre and post. Thus, you would (for instance) look at both (a) the correlation, pre-post, and (b) the t-test, pre-post. This expectation is so strong that I can't imagine what you are looking at - in a single sample? - where you find items "significant" at pre, taken alone, and at post, alone. What are you testing? (If these are the only items that correlate significantly with the corrected item-total, then you either have a very tiny N or a rather disparate set of items making up a scale.)

You can look at internal reliability via the Cronbach alpha from "Reliability". You can look at two aspects of pre-post reliability: the correlation (for consistency) and a significant change (for the ability to measure a change, assuming that there should be a change).

-- Rich Ulrich
In reply to this post by E. Bernardo
My responses are interspersed below.
On Thu, Dec 20, 2012 at 10:12 PM, E. Bernardo <[hidden email]> wrote:
You are providing conflicting information above. You state that you ran a pretest-posttest analysis for a "unidimensional scale with 10 items" which suggests to me that you derived a composite score to use as the dependent variable (e.g., you computed a sum or mean across all items for each subject). However, you go on to suggest that you performed pre-post analyses per item. Please clarify.
What do you mean that the scale has a different set of items during both measurement periods? Why?
The reliability of composite scores on a measuring instrument and/or reliability on change scores are unrelated to what you've been discussing thus far, in my estimation.
Technically, just because you have more than two measurement points does not make the analysis a latent growth curve (LGC) model. You can have an LGC model with only two points at which each subject was measured. Anyway, the answer to your final question is no.
In reply to this post by Ryan
Dear RB, Ulrich, et al. Sorry for the poor English. Let me rephrase the scenario. Our variable is latent, with 10 items. The ten-item, five-point Likert-type questionnaire was administered to the same sample on two different occasions, giving factors F1 and F2. So we have two correlated factors, F1 and F2, and we want to treat them as latent variables using AMOS 20. We expected all 10 items to load significantly (p < .05) on F1 and on F2. However, the actual data showed that some items have non-significant (p > .05) factor loadings on F1 or F2; thus, the set of items that load on F1 is different from the set of items that load on F2. Our question is: can we proceed to correlate F1 and F2? Is this not a measurement problem? Eins
In reply to this post by E. Bernardo
"... proceed to correlate F1 and F2"? "... measurement problem"?
Please read my previous post more closely -- you should START by correlating F1 and F2, to see if there is any consistency (reliability of one sort) between the two times, as I said. Is there a "measurement problem" -- which I would more likely call a "scale development problem"? If so, what it affects is what you can say about your results so far. What is the narrative going to be? Is this to be regarded as a pilot study, which mainly reveals problems in the scale?

You say that you have looked at which items are "loaded significantly." Okay. I haven't used AMOS, but I assume that this is what I referred to, explicitly, as testing the item vs. corrected item-total correlations. But "significant" is not an effect size or a measure of association... and that is why I said, also explicitly, that the lack could indicate a tiny N or disparate items. You have yet to mention the N - even small correlations could be "significant" if the N is large enough.

Cronbach's alpha is a measure using the average correlation, and it tells you something about how well the items hang together. The procedure will also give you "alpha if item is deleted". If that one goes UP for some item, then you know that this item is hurting the internal consistency of the set of items.

A composite score does not *have* to have similar, correlated items in order to be useful. But you do call this one a latent factor, and you do describe it as comprising 10 Likert items: I don't expect that 10 items on a symmetric scale of "How much do you agree" would be used without some notion of latent factors.

However, an important aspect of "reliability" is that it is always "reliability in THIS sample." That implies, for instance, that you expect higher measured reliability when the sample is diverse on the latent factor, and lower when the sample is pre-selected as being similar. Thus, for data I have used, I expect a stronger factor structure among patients' symptoms at PRE, and a weaker structure at POST, corresponding to the greater/fewer symptoms present. By similar logic, for a long, active educational intervention in a classroom, I might expect more structure at POST, after the intervention shows an effect.

-- Rich Ulrich
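For completeness, a minimal SPSS sketch of the checks described above, assuming the pretest items are named Q1_pre to Q10_pre and the posttest items Q1_post to Q10_post (these variable names, and the assumption that each set of items is adjacent in the data file, are hypothetical; adjust to your own data):

* Internal consistency at pretest, with corrected item-total correlations
  and "Cronbach's Alpha if Item Deleted".
RELIABILITY
  /VARIABLES=Q1_pre Q2_pre Q3_pre Q4_pre Q5_pre Q6_pre Q7_pre Q8_pre Q9_pre Q10_pre
  /SCALE('Pretest') ALL
  /MODEL=ALPHA
  /SUMMARY=TOTAL.

* Pre-post consistency and change, using the composite (total) scores.
COMPUTE pre_total  = SUM(Q1_pre TO Q10_pre).
COMPUTE post_total = SUM(Q1_post TO Q10_post).
EXECUTE.
CORRELATIONS /VARIABLES=pre_total post_total.
T-TEST PAIRS=pre_total WITH post_total (PAIRED).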
In reply to this post by E. Bernardo
Eins,

Reliability = true score variance / observed score variance

where

observed score variance = true score variance + error score variance

Within a one-factor confirmatory factor analytic modeling framework, you can estimate true score variance and error score variance as follows:

estimated true score variance = [sum(factor loadings)]^2
estimated error score variance = sum(error variances) + 2*[sum(error covariances)]
estimated reliability = estimated true score variance / (estimated true score variance + estimated error score variance)

The formula above, employed on data from a single testing occasion, will yield a more accurate estimate of composite score reliability than Cronbach's alpha.

Reference: Brown, T. A. (2006). Confirmatory factor analysis for applied research (D. A. Kenny, Ed.). New York: The Guilford Press.

Estimating test-retest reliability using a structural equation model is another matter for another time.

Ryan
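A quick worked illustration of the formula above, with invented numbers (not Eins's data): suppose a one-factor model for 8 standardized items yields a loading of .5 for every item, an error variance of .75 for every item, and no error covariances. Then estimated true score variance = (8 x .5)^2 = 16, estimated error score variance = 8 x .75 = 6, and estimated reliability = 16 / (16 + 6) ≈ .727.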
Ryan
Do you have an example set of syntax to do this? Did you use OMS?

Art Kendall
Social Research Consultants
In reply to this post by Ryan
Ryan (and others),
What you are describing must be true of the latent factor itself, rather than of the pragmatic total score. That should be appropriate, in the end, for using structural equations for the whole problem; and the user does say that he is using AMOS for modeling. (I'm afraid that I have never been impressed by the structural equations approach, in general. Perhaps I should have mentioned that I was ignoring that, to deal with the more basic "scale development" problem.)

In any case -- if I had doubts about the success of devising my Likert scale, then I would certainly start with the simple approach of examining the success of the Likert scale as a total score.

-- Rich Ulrich
Rich,

The (coefficient omega) formula I provided previously is a way to estimate total-score reliability, just as Cronbach's alpha is. The difference is the model under which total-score reliability is being estimated (confirmatory factor analytic model versus "essentially tau-equivalent model").

Cronbach's alpha is based on the "essentially tau-equivalent model," which assumes that all items have equal loadings on one and only one factor and that each item has a unique variance composed entirely of error. Moreover, the errors of any pair of items are assumed to be uncorrelated. The more these assumptions are violated (e.g., unequal factor loadings, correlated errors), the more Cronbach's alpha will mis-estimate reliability.
If one constructs an "essentially tau equivalent model" using an SEM program (e.g., AMOS) and calculates coefficient omega, the reliability estimate will equal Cronbach's alpha. However, if this model fits the data poorly, then one would likely consider employing a less restrictive model (CFA) and re-estimate total-score reliability.
Relaxing the assumptions of the "essentially tau equivalent model" may change the estimate of reliability of total scores. However, relaxing the assumptions will not change the psychometric property being estimated. Instead, it allows one to obtain a more accurate estimate of total-score reliability under a model which yields a superior fit to the data.
Ryan
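As a small numeric illustration of this point (the loadings below are invented, not taken from any post in this thread): for four standardized items with loadings of .8, .8, .4, .4 and independent errors, coefficient omega works out to about .71, while Cronbach's alpha computed from the model-implied covariances works out to about .68; alpha under-estimates reliability here because the loadings are unequal. A hedged SPSS sketch of the arithmetic, assuming an active dataset:

* Invented loadings; the error variance of each standardized item is 1 - loading**2.
compute true_var  = (.8 + .8 + .4 + .4)**2.                                 /* 5.76 */
compute error_var = (1 - .8**2) + (1 - .8**2) + (1 - .4**2) + (1 - .4**2).  /* 2.40 */
compute omega     = true_var / (true_var + error_var).                      /* about .706 */
compute alpha     = (4/3) * (1 - 4/(true_var + error_var)).                 /* about .680; k = 4, each item variance = 1 */
execute.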
In reply to this post by Art Kendall
Art,

This post is filled with commentary in between code. Please let me know if there is any part of my response which requires clarification. (For those not interested in this discussion about estimating reliability [defined as true score variance / observed score variance] via structural equation modeling, I suggest you stop reading now.)
Let's begin by creating data (means, SDs, and rhos) for 8 items (x1, x2, ..., x8) based on an artificial sample of N=500 using the following SPSS code:
MATRIX DATA VARIABLES=ROWTYPE_ x1 x2 x3 x4 x5 x6 x7 x8.
BEGIN DATA
N     500   500   500   500   500   500   500   500
MEAN  .017  .040  .022  .056 -.019 -.023  .033  .020
SD   1.032 1.032 1.038 1.028  .946  .985  .993 1.070
CORR 1.000
CORR  .276 1.000
CORR  .255  .332 1.000
CORR  .222  .213  .552 1.000
CORR  .255  .211  .232  .213 1.000
CORR  .192  .235  .240  .182  .269 1.000
CORR  .221  .262  .277  .217  .202  .531 1.000
CORR  .555  .275  .275  .231  .169  .184  .169 1.000
END DATA.

Next, let's estimate reliability by calculating Cronbach's alpha using SPSS code:

RELIABILITY
  /VARIABLES=X1 X2 X3 X4 X5 X6 X7 X8
  /SCALE('ALL VARIABLES') ALL
  /MODEL=ALPHA
  /STATISTICS=CORR
  /SUMMARY=TOTAL
  /MATRIX IN(*).
After running the RELIABILITY procedure above, you should obtain a Cronbach's alpha coefficient of .7441720125. As I mentioned in a previous post, Cronbach's alpha is based on the "essentially tau-equivalent model" and, as a result, can be calculated by fitting a single-factor confirmatory factor analysis in which the factor loadings are constrained to be equal and the errors are estimated freely and assumed to be independent, using the unweighted least squares estimation method (which is analogous to OLS), via the following AMOS code:
#Region "Header"
Imports System
Imports System.Diagnostics
Imports Microsoft.VisualBasic
Imports AmosEngineLib
Imports AmosGraphics
Imports AmosEngineLib.AmosEngine.TMatrixID
Imports PBayes
#End Region

Module MainModule
    Public Sub Main()
        Dim Sem As AmosEngine
        Sem = New AmosEngine
        Sem.TextOutput
        AnalysisProperties(Sem)
        ModelSpecification(Sem)
        Sem.FitAllModels()
        Sem.Dispose()
    End Sub

    Sub ModelSpecification(Sem As AmosEngine)
        Sem.GenerateDefaultCovariances(False)
        Sem.BeginGroup("C:\<specify path>\reliability_example.sav", "reliability_example")
        Sem.GroupName("Group number 1")
        Sem.AStructure("x4 = (Loading) Factor + (1) err4")
        Sem.AStructure("x3 = (Loading) Factor + (1) err3")
        Sem.AStructure("x2 = (Loading) Factor + (1) err2")
        Sem.AStructure("x1 = (Loading) Factor + (1) err1")
        Sem.AStructure("x5 = (Loading) Factor + (1) err5")
        Sem.AStructure("x6 = (Loading) Factor + (1) err6")
        Sem.AStructure("x7 = (Loading) Factor + (1) err7")
        Sem.AStructure("x8 = (Loading) Factor + (1) err8")
        Sem.AStructure("Factor (1)")
        Sem.Model("Default model", "")
    End Sub

    Sub AnalysisProperties(Sem As AmosEngine)
        Sem.Uls
        Sem.Iterations(50)
        Sem.InputUnbiasedMoments
        Sem.FitMLMoments
        Sem.Standardized
        Sem.Mods(10)
        Sem.Seed(1)
    End Sub
End Module

We can take the constrained factor loadings and error variances to calculate Cronbach's alpha in SPSS as follows:
*COMPUTE:.

compute Rxx_ess_tau_equiv_model = (0.524194608262145*8)**2 /
  ((0.524194608262145*8)**2
    + (.788113964675750 + .788113964675750 + .800509124675750 + .779890444675750
       + .618346180675750 + .693504562675750 + .709296914675750 + .867830212675749)
    + 2*(0)).
execute.

After running the code above, you will obtain an estimate of reliability that is equal to the Cronbach's alpha coefficient of .7441720125. (It should be noted that one could have estimated reliability directly within AMOS by employing a user-defined estimand.)
The next question, however, is whether there is a way to obtain a more accurate estimate of reliability. Answering this question requires that we fit the same CFA model using the maximum likelihood estimation method to obtain a Chi-Square statistic and other fit indices, followed by modifications to the model based on both statistical and substantive reasons. First, let's tackle the statistical reason for re-parameterizing the model successively until we have achieved a model with a superior fit:
1. Baseline Model (equal factor loadings and independent error variances): Chi-Square (df=27)=276.555, p<.001, GFI=.880, CFI=.689, RMSEA=.136, Rxx = .743. Again, we calculate reliability using the equation used previously:
compute Rxx_baseline_MLE = (.523268189314732*8)**2 / ((.523268189314732*8)**2 + (.762278689905565 + .796470532118462 + .723442742446598 + .780165299728391 + .713850584087045 + .709682946567755 +.711514761829753 + .850962087548053 ) + 2*(0)). execute. /*Intermediary Post-Hoc Models*/
2. Model with unequal factor loadings: Chi-Square (df=20)=260.078, p<.001, GFI=.885, CFI=.701, RMSEA=.155, Rxx = .746 Again, we calculate reliability using the same equation:
compute true_score_variance_unequal_loadings_MLE = (0.556735618112637 + 0.513993209453236 + 0.648557304729973 + 0.555545309689387 + 0.390226949410027 + 0.475149789170106 + 0.499079633791371 + 0.556659600195888)**2. compute error_score_variance_unequal_loadings_MLE = .752939403578407 + .798704932657766 + .654662520799237 + .746039841051326 + .740849095967666 +.742517227880893 +.734996421160121 + .832740289549891 + 2*(0). compute observed_score_variance_unequal_loadings_MLE = true_score_variance_unequal_loadings_MLE + error_score_variance_unequal_loadings_MLE. compute Rxx_unequal_loadings_MLE = true_score_variance_unequal_loadings_MLE / observed_score_variance_unequal_loadings_MLE. execute. 3. Model with unequal factor loadings and error cov(1,8): Chi-Square (df=19)=159.068, p<.001, GFI=.923, CFI=.825, RMSEA=.122
4. Model with unequal factor loadings and error cov(1,8),cov(6,7): Chi-Square (df=18)=62.859, p<.001, GFI=.966, CFI=.944, RMSEA=.071 /*Final Model*/
5. Model with unequal factor loadings and error cov(1,8),cov(6,7),cov(3,4): Chi-Square (df=17)=18.196, p=.377, GFI=.991, CFI=.999, RMSEA=.012, Rxx =.648
Finally, we calculate reliability from the best fitting model as follows:
compute true_score_variance_final_model_MLE = (0.504038022382608 + 0.567074781382506 + 0.590454496251624 + 0.453948802985055 + 0.417199949916064 + 0.437615415101698 + 0.459268052742018 + 0.484263617778839)**2.

The models stated above are nested, allowing us to test whether each modification yields a significant improvement via a likelihood ratio test. But as most data analysts would agree, such modifications should be considered post hoc and should not be solely statistically driven, but also substantively driven. With that in mind, suppose the unidimensional construct is depression, and that (a) items 1 and 8 are self-dislike and self-criticism, respectively; (b) items 6 and 7 are loss of pleasure and loss of interest, respectively; and (c) items 3 and 4 are fatigue and loss of energy, respectively. Would we be surprised to observe that these pairs of items covary above and beyond what would be expected given the unidimensional construct the items are intended to measure? Certainly not! Therefore, given the improvement in model-to-data fit as well as a rationale for allowing covarying errors (shared item content), a test developer may very well be comfortable using the reliability estimate from the final model.
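To make the likelihood ratio (chi-square difference) test concrete with the fit statistics reported above: going from Model 4, Chi-Square(df=18)=62.859, to the final Model 5, Chi-Square(df=17)=18.196, gives a difference of 44.663 on 1 df, far exceeding the .05 critical value of 3.84, so adding the error covariance between items 3 and 4 significantly improves model fit.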
The AMOS code for the final model is presented below:

#Region "Header"
Imports System
Imports System.Diagnostics
Imports Microsoft.VisualBasic
Imports AmosEngineLib
Imports AmosGraphics
Imports AmosEngineLib.AmosEngine.TMatrixID
Imports PBayes
#End Region

Module MainModule
    Public Sub Main()
        Dim Sem As AmosEngine
        Sem = New AmosEngine
        Sem.TextOutput
        AnalysisProperties(Sem)
        ModelSpecification(Sem)
        Sem.FitAllModels()
        Sem.Dispose()
    End Sub

    Sub ModelSpecification(Sem As AmosEngine)
        Sem.GenerateDefaultCovariances(False)
        Sem.BeginGroup("C:\<specify path>\reliability_example.sav", "reliability_example")
        Sem.GroupName("Group number 1")
        Sem.AStructure("x4 = Factor + (1) err4")
        Sem.AStructure("x3 = Factor + (1) err3")
        Sem.AStructure("x2 = Factor + (1) err2")
        Sem.AStructure("x1 = Factor + (1) err1")
        Sem.AStructure("x5 = Factor + (1) err5")
        Sem.AStructure("x6 = Factor + (1) err6")
        Sem.AStructure("x7 = Factor + (1) err7")
        Sem.AStructure("x8 = Factor + (1) err8")
        Sem.AStructure("err1 <--> err8")
        Sem.AStructure("err6 <--> err7")
        Sem.AStructure("err3 <--> err4")
        Sem.AStructure("Factor (1)")
        Sem.Model("Default model", "")
    End Sub

    Sub AnalysisProperties(Sem As AmosEngine)
        Sem.Iterations(50)
        Sem.InputUnbiasedMoments
        Sem.FitMLMoments
        Sem.Standardized
        Sem.Mods(10)
        Sem.Seed(1)
    End Sub
End Module

Ryan