Hello everyone,
Please send me your answer to [hidden email].

I have a question on the type of data to enter to calculate a test's reliability:
1. Do I enter the total score for the test?
Or
2. Do I enter zeros or ones for each question per subject (depending on whether their answer is right or wrong)?

Thanks a lot,

Celina

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Celina Byers, Ph.D.
Assistant Professor
Department of Instructional Technology
College of Science and Technology
Bloomsburg University
2221 McCormick
400 East 2nd Street
Bloomsburg, PA 17815
Voice: 570-389-5485
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
In order to see whether the set of items is internally consistent, i.e., to calculate coefficient alpha, you need to enter the items. However, if you have multiple-choice items you should enter the items directly, the way they appear on the answer sheet. Then you should proofread the data completely before doing anything else. Then use RECODE to change the responses to right/wrong coding.

Art Kendall
Social Research Consultants
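A minimal sketch of the data layout and proofreading step described above, assuming four multiple-choice items coded 1-4 and the hypothetical variable names item01 to item04 (your own item names and number of items will differ):

DATA LIST FREE / id item01 item02 item03 item04.
BEGIN DATA
1 2 4 1 3
2 2 3 1 4
3 1 4 2 3
END DATA.

* Proofread before recoding: frequency tables flag out-of-range codes and blanks.
FREQUENCIES VARIABLES=item01 TO item04.

Each row is one student and each column holds one item's raw response, so nothing is lost before the right/wrong recode.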
In reply to this post by Byers, Celina
The primary purpose of the RELIABILITY procedure is to determine whether an item or scale is free from measurement error by measuring internal consistency. It really can't be used on nominal-level data (such as the raw multiple-choice answers). Cronbach's alpha on the 'correctness' of a response would tell you that if someone answered one item correctly, then they are more likely to answer the other items in your test correctly. I doubt that is what you want.

Melissa

-----Original Message-----
From: Byers, Celina [mailto:[hidden email]]
Sent: Saturday, February 17, 2007 6:33 AM
To: Melissa Ives
Subject: RE: [SPSSX-L] Reliability test

Melissa,

The test items I am analyzing are multiple choice, and each has 4 alternatives with only one correct answer. My question is: is it enough to enter the data as I did in the table of the attached file, or do I need to enter the 4 alternatives for each question with 1 and 0 (right and wrong)?

Thanks so much,

Celina

-----Original Message-----
From: Melissa Ives [mailto:[hidden email]]
Sent: Friday, February 16, 2007 4:08 PM
To: Byers, Celina
Subject: RE: [SPSSX-L] Reliability test

Enter the individual items and the name of the test/scale. Here is some syntax that we use -- replace var1 to varn and SCALE with your appropriate variable names and scale name.

* Comment can indicate the test name.
RELIABILITY
  /VARIABLES=var1 TO varn
  /SCALE(SCALE) = ALL
  /FORMAT=NOLABELS
  /MODEL=ALPHA
  /STATISTICS=DESCRIPTIVE SCALE CORR COV
  /SUMMARY=TOTAL MEANS VARIANCE CORR.

Melissa

The bubbling brook would lose its song if you removed the rocks.
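As a concrete usage example of the syntax Melissa quotes, here is a minimal sketch assuming the raw responses have already been recoded into 0/1 variables with the hypothetical names ritem01 to ritem04 (see the RECODE examples in the next reply). With dichotomous right/wrong items, the coefficient alpha this produces is equivalent to KR-20.

RELIABILITY
  /VARIABLES=ritem01 ritem02 ritem03 ritem04
  /SCALE(TEST) = ALL
  /MODEL=ALPHA
  /STATISTICS=DESCRIPTIVE SCALE
  /SUMMARY=TOTAL.

The item-total output from /SUMMARY=TOTAL includes alpha-if-item-deleted values, which help locate the weak items discussed below.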
In reply to this post by Byers, Celina
Celina,
As Art Kendall has suggested, Cronbach's alpha is a good indicator of the internal consistency of your items once they have been recoded into dichotomous correct/incorrect format. For example:

* Test item item01, where A = 1, B = 2, C = 3, D = 4 and the correct answer is B.
RECODE item01 (1 3 4 = 0) (2 = 1) INTO ritem01.

* Test item item02, where A = 1, B = 2, C = 3, D = 4 and the correct answer is D.
RECODE item02 (1 2 3 = 0) (4 = 1) INTO ritem02.

(It is usually a good idea to recode into a new variable.)

You would then use RELIABILITY to compute the internal consistency of the binary correct/incorrect items. This is a widely accepted practice for evaluating the internal consistency of multiple-choice tests (although there are more advanced and sophisticated avenues to pursue, such as IRT).

To the extent that the items on your test measure a unitary underlying dimension of knowledge or skill, Cronbach's alpha for the test will be high. A low value for alpha indicates that the test is in need of refinement. Some common causes of low alpha are items that do not correlate well with the total score because:
a) they measure a different dimension;
b) they elicit random answering and guessing;
c) they are badly worded or hard to understand;
d) they are too easy, and everyone gets them right;
e) they are too hard, and no one gets them right.

For a measure of cognitive ability or achievement, standards for alpha are generally pretty high (greater than .9 for a long test, at least .8). I am writing from home now, but if citations would be helpful please e-mail me and I will get some when I am back in the office.

Best wishes,

Stephen Brand

For personalized and professional consultation in statistics and research design, visit www.statisticsdoc.com
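When a test has many items, writing one RECODE per item gets tedious. Here is a minimal batch sketch using DO REPEAT, under the assumption of hypothetical item names item01 to item04 and a made-up answer key (B, D, A, C, i.e. 2, 4, 1, 3); substitute your own item list and key.

* Score each response against its key: 1 if it matches, 0 otherwise.
DO REPEAT raw = item01 item02 item03 item04
        / key = 2 4 1 3
        / scored = ritem01 ritem02 ritem03 ritem04.
COMPUTE scored = (raw = key).
END REPEAT.
EXECUTE.

The resulting ritem variables can then go straight into the RELIABILITY syntax Melissa posted earlier in the thread.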