Posted by
Arthur Kramer on
Dec 22, 2006; 2:02pm
URL: http://spssx-discussion.165.s1.nabble.com/Sample-Means-tp1072828p1072836.html
Confidence intervals, as the name suggests, are ranges, used to
estimate parameters from sample statistics; they get wider or narrower
depending on how much confidence one wants in estimating the "true"
difference. In Samir's case it will be the difference between the two
sample means (because he says he's going to use an independent-groups t). I
hope he provides and interprets the 95% C.I. and eta or d if (when!?) his
research obtains significance.
Arthur Kramer
-----Original Message-----
From: Richard Ristow [mailto:[hidden email]]
Sent: Thursday, December 21, 2006 5:48 PM
To: Arthur Kramer; 'Samir Omerovi'; [hidden email]
Cc: 'Statisticsdoc'; 'Spousta Jan'; 'Rick Bello'
Subject: RE: Sample Means
At 05:15 PM 12/21/2006, Arthur Kramer wrote:
>Since Samir is assuming the responses constitute a scale, I am
>suggesting correlate the students scores on the scale with the
>non-student scores on the scale.
You may have mis-phrased this. You can't correlate the values of one
variable between two groups; a correlation is of two variables over one
set of observations. Now,
>[The problem] can yield a Pearson correlation obtained by using group
>membership as a dummy coded predictor [correlated with] the scale
>score. That might also be more appropriate with the large n, because
>as I said, with this many subjects any difference is apt to obtain
>significance. The correlation is just another way of measuring effect
>size, isn't it?
Well, relative effect size. Since this is an ANOVA problem (the
independent-samples t-test is the one-factor, two-level special case of
ANOVA), one might want to give R^2 (the square of your correlation
coefficient), i.e. percent of variance explained, to follow standard
ANOVA terminology.
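A minimal sketch of that dummy-coding idea, in Python with invented data (the group codes, scores, and function name are all hypothetical, not from the thread): the Pearson r between a 0/1 group indicator and the scale score is the point-biserial correlation, and r^2 is the usual ANOVA "percent of variance explained".

```python
# Sketch: Pearson r between a 0/1 group dummy and a scale score
# (the point-biserial correlation). Data are invented for illustration.
from statistics import mean

group = [0] * 5 + [1] * 5          # dummy code: 0 = student, 1 = non-student
score = [5, 6, 4, 7, 5, 3, 4, 4, 5, 3]

def pearson_r(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

r = pearson_r(group, score)
print(f"point-biserial r = {r:.3f}, R^2 = {r * r:.3f}")
```

With these made-up numbers r comes out negative simply because the group coded 1 happens to score lower; the sign depends only on which group gets the 1.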
I said, 'relative effect size'. For absolute effect size, you'd
estimate a confidence interval for the difference of the group means:
"the 95% confidence interval for the difference is 0.8 to 2.1 scale
levels."
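A rough sketch of such an interval, again with invented data: for large samples the normal critical value 1.96 is a reasonable stand-in for the t critical value, and the unpooled (Welch-style) standard error avoids the equal-variance assumption. The data and the function name are made up for illustration.

```python
# Sketch: approximate 95% CI for the difference of two independent
# sample means. Large-sample normal approximation (z = 1.96);
# a real analysis would use the t distribution. Data are invented.
from statistics import mean, stdev

students = [5, 6, 4, 7, 5, 6, 6, 5, 4, 6]
non_students = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]

def diff_mean_ci(a, b, z=1.96):
    """Approximate CI for mean(a) - mean(b), unpooled (Welch-style) SE."""
    na, nb = len(a), len(b)
    se = (stdev(a) ** 2 / na + stdev(b) ** 2 / nb) ** 0.5
    d = mean(a) - mean(b)
    return d - z * se, d + z * se

lo, hi = diff_mean_ci(students, non_students)
print(f"95% CI for the difference: {lo:.2f} to {hi:.2f}")
```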
>If he does want to go the non-parametric route, a Mann-Whitney U may
>provide some insight into the "scale" score.
Marta has warned about Mann-Whitney U when there are few (like 7)
possible values of the dependent variable, so there will be many ties.
I haven't worked this up from her tutorials, but I'd look at them
before using Mann-Whitney incautiously. (I'd used it, incautiously, in
many cases like this, before Marta's warning.)
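To see why a 7-point scale produces massive tying, here is a hand-rolled sketch of the Mann-Whitney U statistic using mid-ranks for ties; the tiny samples are invented, and a real analysis would also apply the tie correction to the variance when computing the p-value.

```python
# Sketch: Mann-Whitney U with mid-ranks for tied values, to show how
# a coarse (e.g. 7-point) scale forces many ties. Illustration only.

def mann_whitney_u(a, b):
    """U for sample a vs. b, averaging ranks across tied values."""
    combined = sorted((v, src) for src, vals in ((0, a), (1, b)) for v in vals)
    n = len(combined)
    rank_sum_a = 0.0
    i = 0
    while i < n:
        # find the run of observations tied at this value
        j = i
        while j < n and combined[j][0] == combined[i][0]:
            j += 1
        midrank = (i + 1 + j) / 2.0   # average of ranks i+1 .. j
        for k in range(i, j):
            if combined[k][1] == 0:   # observation came from sample a
                rank_sum_a += midrank
        i = j
    n1 = len(a)
    return rank_sum_a - n1 * (n1 + 1) / 2.0

u = mann_whitney_u([1, 2, 2, 3, 7], [2, 2, 5, 6, 7])
print("U =", u)   # U = 8.5 — half-integer, a telltale sign of ties
```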
>Doing a Chi-Square may necessitate multiple goodness-of-fit
>analyses: percent of students saying "1" compared to the percentage of
>non-students saying "1", etc., up to seven--do you protect your Type I
>error then?
Well, the classic, basic, chi-square doesn't; it tests against the
single null hypothesis that the two categorical variables are
statistically independent.
But if the test finds non-independence, there's the question of which
cells are most affected. For that, using the SPSS CROSSTABS procedure
(and borrowing from an earlier thread),
>Try adding, to subcommand CELLS, EXPECTED and ASRESID.
>
>EXPECTED is the expected number in the cell, if the null hypothesis is
>correct, and ASRESID is the adjusted standardized difference between
>observed and expected. Look for cells with ASRESID greater than 2 in
>absolute value; that will show cells where the difference is most
>important.
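A sketch of what those two cell statistics are, computed by hand in Python: expected counts under independence, and the adjusted standardized residual (O - E) / sqrt(E (1 - rowtotal/N)(1 - coltotal/N)). The 2x3 table is invented for illustration; SPSS reports these same quantities via /CELLS=EXPECTED ASRESID.

```python
# Sketch: expected counts and adjusted standardized residuals for a
# two-way table. The table below is invented for illustration.
from math import sqrt

table = [[30, 20, 10],   # e.g. students answering "1", "2", "3"
         [10, 20, 30]]   # non-students

def adjusted_residuals(obs):
    """Adjusted standardized residual for each cell of a 2-way table."""
    rows = [sum(r) for r in obs]
    cols = [sum(c) for c in zip(*obs)]
    n = sum(rows)
    out = []
    for i, row in enumerate(obs):
        line = []
        for j, o in enumerate(row):
            e = rows[i] * cols[j] / n                       # expected count
            denom = sqrt(e * (1 - rows[i] / n) * (1 - cols[j] / n))
            line.append((o - e) / denom)
        out.append(line)
    return out

for line in adjusted_residuals(table):
    print(["%+.2f" % z for z in line])
```

In this toy table the corner cells come out near +/-3.9, well past the |2| threshold, while the middle column's residuals are zero: exactly the "which cells drive the non-independence" reading described above.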