Reliability Test

Reliability Test

fzain
Hello,

Through my quasi-experiment in a math classroom, I made pre- and
post-exams. Each exam had 3-4 open-ended questions (not MCQ). Each question
had a different score. Each exam is summed up over 10.
I was asked to do the reliability test using SPSS for each exam.
The question is: should I unify the question scores for the reliability test
to work, or should I keep the question scores as written on the exam?

This is a sample from my SPSS:

ID   Q1    Q2     Q3    Q4
1    2.5   3      1     1
2    2     2.25   0.5   2

Please advise.



--
Sent from: http://spssx-discussion.1045642.n5.nabble.com/

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD
Re: Reliability Test

Rich Ulrich

This could use much more specific description.

Pre-Post:  Are the same questions given twice?
"not MCQ" I guess to be "not multiple choice".

"Each question has a different score" says ... what?
Does "summed up over 10", along with "each ... different",
say that the 3 or 4 questions are scored so that their total is 10,
with different values for each question?

Pre-Post suggests a paired analysis: look at both the
r (similarity) and the paired t-test (change).  Especially
if students are supposed to be learning what is on those
tests, it makes sense to perform the test on each item,
to tell you which ones show gains.  - You do seem to imply
that the same items are available for comparing.  You would
also want to look at the total score.
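
A minimal syntax sketch of that, assuming each student's pre and post
scores sit in one row, with hypothetical variable names PreQ1-PreQ4,
PostQ1-PostQ4 and totals PreTotal, PostTotal (adjust to your own names):

    * Paired t-tests, item by item and for the total; the paired
      output also reports the pre-post correlation r for each pair.
    T-TEST PAIRS=PreQ1 PreQ2 PreQ3 PreQ4 PreTotal
           WITH  PostQ1 PostQ2 PostQ3 PostQ4 PostTotal (PAIRED).

Each Pre variable is paired with the Post variable in the same
position, so the item-level gains and the total-score gain come out of
a single run.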

This is one form of "reliability" which can be severely limited
if either pre or post happen to average at the minimum or
maximum.  It is very frequently true that those who start out
knowing most end up knowing most, and that is what this
"reliability" r  is going to show. If you are really teaching a lot,
however, you might NOT expect (or get) a high "r"  for pre-post.

The measure of "internal consistency" provided by the SPSS routine
Reliability shows how consistently a set of item scores reflects a
single universe or single dimension. Presumably, the items
from Pre and Post could show different levels of consistency,
so they should be tested separately.
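
In syntax, that would be something like the following for the pre-test
(item names taken from the sample data above; run it again with the
post-test items), a sketch rather than a definitive setup:

    * Cronbach's alpha plus item-total statistics for one exam.
    RELIABILITY
      /VARIABLES=Q1 Q2 Q3 Q4
      /SCALE('Pretest') ALL
      /MODEL=ALPHA
      /STATISTICS=DESCRIPTIVE SCALE
      /SUMMARY=TOTAL.

The SUMMARY=TOTAL line adds the "alpha if item deleted" table, which
shows whether any single question is pulling the consistency down.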

--
Rich Ulrich




From: SPSSX(r) Discussion <[hidden email]> on behalf of fzain <[hidden email]>
Sent: Tuesday, March 3, 2020 12:26 PM
To: [hidden email] <[hidden email]>
Subject: Reliability Test
 