Hello
I am considering work that has already been carried out on comparing continuous measurements obtained using two procedures. For each procedure there is more than one rater, but the raters are not necessarily the same across the two procedures. Suppose the sample variances for the two procedures are denoted by 'sampvar1' and 'sampvar2', respectively, and that the covariance between the measurements under procedures 1 and 2 is denoted by Cov(X_1, X_2). The formula Cov(X_1, X_2)/sqrt(sampvar1*sampvar2) appears to be an analogue of the Pearson product-moment correlation coefficient (for which there are no repeated measures).

However, I am having some difficulty understanding how this formula works in practice with repeated measures. In particular:

1) Can the coefficient be calculated using SPSS when there are repeated measures, and if so, can you please provide some assistance?

2) Can you provide a reference that explains the underlying mathematics?

One main conceptual difficulty I am experiencing is understanding which values would be matched with one another in determining the covariance between measurements from procedure 1 and procedure 2, given that the raters under the two procedures are not identical. This difficulty carries over, of course, to the interpretation of the correlation coefficient itself.

I look forward very much to having some light shed on these issues.

Many thanks

Best wishes

Margaret
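For reference: when each row of the data file holds one matched pair of measurements (a single value per procedure for the same subject), Cov(X_1, X_2)/sqrt(sampvar1*sampvar2) is just the ordinary Pearson coefficient, and SPSS's CORRELATIONS procedure computes it directly. A minimal sketch, assuming illustrative variable names proc1 and proc2 (these names are not from the original post):

* One row per subject; proc1 and proc2 are hypothetical variables
* holding the single measurement obtained under each procedure.
CORRELATIONS
  /VARIABLES=proc1 proc2
  /PRINT=TWOTAIL NOSIG
  /MISSING=PAIRWISE.

The repeated-measures difficulty raised above is that each subject contributes more than one value per procedure, so the row pairing is no longer automatic; that is the point taken up in the replies below.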
Margaret,
A critical issue is whether the subjects are matched across the two procedures - is this the case?

HTH,

Stephen Brand

For personalized and professional consultation in statistics and research design, visit www.statisticsdoc.com
Dear Stephen
Yes they are, but as the measurements are to be compared across procedures and the two raters for procedure 1 are not the same as for procedure 2, I still need help. (It might help if we assume that there are two raters for each procedure.)

Would it be appropriate for me to lay out the data in two columns with the following structure?

Column 1 contains all results for rater 1 (procedure 1), followed by all results for rater 2 (procedure 1), followed by all results for rater 1 (procedure 1), followed by all results for rater 2 (procedure 1).

Column 2 contains all results for rater 3 (procedure 2), followed by all results for rater 4 (procedure 2), followed by all results for rater 4 (procedure 2), followed by all results for rater 3 (procedure 2).

Then presumably the two columns would be treated as though there were no repeated measures, and the corresponding Pearson correlation coefficient would be calculated in the usual way.

This is a guess on my part, but it does seem to make sense. I look forward to receiving any necessary corrections!

Best wishes

Margaret
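If it helps to make the proposed layout concrete, the two stacked columns could be built in SPSS along the following lines. This is only a sketch of the stacking described above, and the variable names are assumptions (one row per subject, with procedure-1 ratings in r1_p1 and r2_p1, procedure-2 ratings in r3_p2 and r4_p2, and an identifier named subject):

* Copies are made so that each rating can appear in two pairings.
COMPUTE r1_p1b = r1_p1.
COMPUTE r2_p1b = r2_p1.
COMPUTE r3_p2b = r3_p2.
COMPUTE r4_p2b = r4_p2.
* Stack the four rater pairings described above into two long columns.
VARSTOCASES
  /MAKE proc1 FROM r1_p1 r2_p1 r1_p1b r2_p1b
  /MAKE proc2 FROM r3_p2 r4_p2 r4_p2b r3_p2b
  /INDEX=pairing(4)
  /KEEP=subject.
* Treat the stacked columns as ordinary paired data, as suggested.
CORRELATIONS
  /VARIABLES=proc1 proc2
  /PRINT=TWOTAIL NOSIG.

The rows come out interleaved by subject rather than in the block order described above, but the set of pairs, and hence the correlation, is the same. Whether the coefficient this produces is the one intended by the author of the proof is, of course, exactly the question being asked; the syntax only reproduces the proposed layout.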
Margaret,
If you were not interested in rater effects, then because the subjects are matched across procedures it would be possible to compute a correlation coefficient between procedures. However, this way of analyzing the data does not get at the question of whether having the same rater in both cases results in a larger correlation than having different raters.

I think that you want four columns: procedure 1 rating, procedure 1 rater, procedure 2 rating, procedure 2 rater. How many raters did you use? Were all raters involved in rating both times? Was there any association between who did the rating at time 1 and time 2?

There are a number of ways of looking at rater effects. Some relatively simple places to start include:

1.) Compare the mean and SD for each rater.

2.) Correlate the ratings for each combination of raters across procedures 1 and 2 (including, for example, rater 1 under procedure 1 with rater 2 under procedure 2).

You can also consider looking at these data in a repeated-measures or hierarchical linear model framework.

HTH,

Stephen Brand

For personalized and experienced consulting in statistics and research design, visit www.statisticsdoc.com
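A minimal sketch of the two starting points listed above, again assuming the hypothetical wide layout with one row per subject and rater-by-procedure variables r1_p1, r2_p1, r3_p2, r4_p2:

* 1) Mean and SD for each rater.
DESCRIPTIVES VARIABLES=r1_p1 r2_p1 r3_p2 r4_p2
  /STATISTICS=MEAN STDDEV.

* 2) Each procedure-1 rater correlated with each procedure-2 rater.
CORRELATIONS
  /VARIABLES=r1_p1 r2_p1 WITH r3_p2 r4_p2
  /PRINT=TWOTAIL NOSIG.

For the repeated-measures or hierarchical approach also mentioned, the data would first need to be restructured to long format (one row per subject, procedure and rater); MIXED is the usual SPSS procedure there, but the appropriate model depends on the design questions asked above (how many raters, and whether any rated under both procedures).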
Dear Stephen
Thank you for your reply. Unfortunately, we are moving away from my original question, and therefore I am no further ahead. I am already working with a two-way mixed model and calculating ICCs in relation to this model, so I am not in search of an answer as to which is the best model to use. Rather, in my reading I am studying a proof which assumes the notion of a correlation coefficient for this type of data in the sense I described in detail in my first e-mail. As previously mentioned, the raters under the two procedures are not necessarily the same, and for the sake of argument I am assuming that there are two raters per procedure. Using the formula I provided in my first e-mail, the author refers to a single correlation coefficient for comparing a measurement under procedure 1 with a measurement under procedure 2.

If possible, could someone please confirm whether the suggestion I made in my last e-mail is correct? For convenience, I offer the relevant details again below:

Would it be appropriate for me to lay out the data in two columns with the following structure?

Column 1 contains all results for rater 1 (procedure 1), followed by all results for rater 2 (procedure 1), followed by all results for rater 1 (procedure 1), followed by all results for rater 2 (procedure 1).

Column 2 contains all results for rater 3 (procedure 2), followed by all results for rater 4 (procedure 2), followed by all results for rater 4 (procedure 2), followed by all results for rater 3 (procedure 2).

Then presumably the two columns would be treated as though there were no repeated measures, and the corresponding Pearson correlation coefficient would be calculated in the usual way.

Thanks

Best wishes

Margaret
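For completeness, the two-way mixed-model ICCs mentioned here can be obtained from SPSS's RELIABILITY procedure; a sketch for the two procedure-1 raters, again with assumed variable names rather than anything from the original data:

* Two-way mixed-model, consistency-type ICC for the procedure-1 raters.
RELIABILITY
  /VARIABLES=r1_p1 r2_p1
  /SCALE('Procedure 1 raters') ALL
  /MODEL=ALPHA
  /ICC=MODEL(MIXED) TYPE(CONSISTENCY) CIN=95 TESTVAL=0.

This ICC describes agreement between raters within a procedure; it is a different quantity from the between-procedure coefficient Cov(X_1, X_2)/sqrt(sampvar1*sampvar2) that the original question concerns.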