http://spssx-discussion.165.s1.nabble.com/Sample-Means-tp1072828p1072835.html
That is a safe choice to make, based on the clarifications you provided.
John.
>I just wanted to thank Richard, Jan, John, Arthur, Rick, Stephen and all
>others for comments and
>suggestions. For the record, I decided to go on with Independent T-test:)
>
>Merry Christmas and Happy New Year to all
>
>Thanks once more
>
>Regards from Bosnia
>
>Samir
>
>
>-----Original Message-----
>From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
>Richard Ristow
>Sent: Thursday, December 21, 2006 11:48 PM
>To: [hidden email]
>Subject: Re: Sample Means
>
>At 05:15 PM 12/21/2006, Arthur Kramer wrote:
>
> >Since Samir is assuming the responses constitute a scale, I am
> >suggesting correlating the students' scores on the scale with the
> >non-students' scores on the scale.
>
>You may have mis-phrased this. You can't correlate the values of one
>variable between two groups; a correlation is of two variables over one
>set of observations. Now,
>
> >[The problem] can yield a Pearson correlation obtained by using group
> >membership as a dummy coded predictor [correlated with] the scale
> >score. That might also be more appropriate with the large n, because
> >as I said, with this many subjects any difference is apt to obtain
> >significance. The correlation is just another way of measuring effect
> >size, isn't it?
>
>Well, relative effect size. Since this is an ANOVA problem (the
>independent-samples t-test is the one-factor, two-level special case of
>ANOVA), one might want to give R^2 (the square of your correlation
>coefficient), i.e. percent of variance explained, to follow standard
>ANOVA terminology.
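(An illustrative sketch, not from the thread: dummy-coding group membership and correlating it with the scale score gives the point-biserial r, and squaring it gives the proportion of variance explained. Done here in Python with NumPy; all data are made up.)

```python
import numpy as np

# Hypothetical 7-point scale scores for two groups (made-up data)
rng = np.random.default_rng(42)
students = rng.integers(1, 8, size=100)
nonstudents = rng.integers(2, 8, size=120)

scores = np.concatenate([students, nonstudents]).astype(float)
group = np.concatenate([np.zeros(100), np.ones(120)])  # dummy-coded membership

# Pearson r of the dummy variable with the scale score = point-biserial r
r = np.corrcoef(group, scores)[0, 1]
r_squared = r ** 2  # proportion of score variance "explained" by group
print(round(r, 3), round(r_squared, 3))
```

With a dichotomous predictor, this r is numerically identical to what the t-test would imply; R^2 is the relative effect size discussed above.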
>
>I said, 'relative effect size'. For absolute effect size, you'd
>estimate a confidence interval for the difference of the group means:
>"the 95% confidence interval for the difference is 0.8 to 2.1 scale
>levels."
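(Again a sketch, not from the thread: the classic pooled-variance confidence interval for the difference of two group means, the absolute effect size Richard describes. Group sizes, means, and SDs below are invented.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(4.5, 1.2, 150)  # hypothetical scale scores, group A
b = rng.normal(4.0, 1.2, 200)  # hypothetical scale scores, group B

n1, n2 = len(a), len(b)
diff = a.mean() - b.mean()

# Pooled variance, as in the standard independent-samples t-test
sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
tcrit = stats.t.ppf(0.975, n1 + n2 - 2)

ci = (diff - tcrit * se, diff + tcrit * se)
print("difference:", round(diff, 2), "95% CI:", np.round(ci, 2))
```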
>
> >If he does want to go the non-parametric route, a Mann-Whitney U may
> >provide some insight into the "scale" score.
>
>Marta has warned about Mann-Whitney U when there are few (like 7)
>possible values of the dependent variable, so there will be many ties.
>I haven't worked this up from her tutorials, but I'd look at them
>before using Mann-Whitney incautiously. (I'd used it, incautiously, in
>many cases like this, before Marta's warning.)
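(A sketch of the tie problem: with only 7 possible values, ties are unavoidable, and an exact Mann-Whitney p-value is unavailable; software falls back to a normal approximation with a tie correction. Illustrated in Python/SciPy with made-up data.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.integers(1, 8, 60)  # 7-point scale -> heavy ties
y = rng.integers(1, 8, 80)

# With many ties the exact distribution is not used; the asymptotic
# method applies a tie correction to the normal approximation.
res = stats.mannwhitneyu(x, y, alternative='two-sided', method='asymptotic')
print(res.statistic, res.pvalue)
```

This is the mechanical computation only; Marta's caution about interpreting U with few distinct values still applies.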
>
> >Doing a chi-square may necessitate multiple goodness-of-fit
> >analyses: the percentage of students saying "1" compared to the
> >percentage of non-students saying "1", and so on up to "7" -- do you
> >protect your Type I error then?
>
>Well, the classic, basic, chi-square doesn't; it tests against the
>single null hypothesis that the two categorical variables are
>statistically independent.
>
>But if the test shows non-independence, there's the question of which
>cells are most affected. For that, using the SPSS CROSSTABS procedure
>(and borrowing from an earlier thread),
>
> >Try adding, to subcommand CELLS, EXPECTED and ASRESID.
> >
> >EXPECTED is the expected count in the cell if the null hypothesis is
> >correct, and ASRESID is the adjusted standardized residual between
> >observed and expected. Look for cells with ASRESID greater than 2 in
> >absolute value; those are the cells where the departure from
> >independence is largest.
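(A sketch of what CROSSTABS computes, not SPSS output itself: expected counts from the independence model, and Haberman's adjusted standardized residuals, which is what SPSS labels ASRESID. The 2 x 7 table below is invented.)

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 (group) x 7 (scale level) table of counts
obs = np.array([[10, 15, 20, 30, 25, 15, 5],
                [ 8, 10, 18, 22, 30, 25, 12]])

chi2, p, dof, expected = chi2_contingency(obs)

n = obs.sum()
row = obs.sum(axis=1, keepdims=True)
col = obs.sum(axis=0, keepdims=True)

# Haberman's adjusted standardized residuals (SPSS's ASRESID):
# (O - E) / sqrt(E * (1 - row_total/n) * (1 - col_total/n))
adj = (obs - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))
print(np.round(adj, 2))  # |value| > 2 flags the most discrepant cells
```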
Prof. John Antonakis