Sample Means


Sample Means

statisticsdoc
Stephen Brand
www.statisticsdoc.com

Richard,

Thanks for citing me on both sides of this discussion :)   Let me say a little more about why I would accept that 1,000 cases can constitute a population, and under what conditions.

It is not too hard to imagine population definitions that encompass small numbers of people (e.g., all of the left-handed residents of the town of Exeter, Rhode Island; the Fall 2006 intake of a small college).

The question of whether you accept that 1,000 cases make up a population depends on the definition of the population.  If these 1,000 cases are all of the potential members of the population, then the mean of those cases constitutes the population mean.  Whatever random processes might have influenced the mean score of that population, that score is the population parameter.  We are not trying to estimate a parameter of a wider population from which we have obtained the 1,000 cases.  In this instance, the one-sample procedure is justified.

Granted, you might say that the left-handed residents of Exeter, or the 2006 intake of a small college, constitute a sub-set of your population of interest, but then I think that you have to allow that these cases do not exhaust the potential membership of the population (which might constitute the left-handed population of Rhode Island, or the various cohorts of potential incoming first year students), and then your means become sample statistics, not population parameters.   BTW, in this instance, the Exeter sample is not a very random one :)

It all depends on where the boundaries of the population are drawn.

Best,

Stephen Brand




-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Richard Ristow
Sent: Tuesday, December 19, 2006 2:59 PM
To: [hidden email]
Subject: Re: Significant difference - Means

To weigh in with two comments:

At 03:54 AM 12/19/2006, Spousta Jan wrote:

>The error of that 3.5 is about sqrt(1/1000) = 0.03, while the error of
>2.9 for students is about sqrt(1/500) = 0.045. That is, both errors are
>of the same order of magnitude, and the population error cannot be
>neglected in this case.

I'd like to second, and emphasize, this. Jan is clearly right here,
where the two groups are the same size. However, the same thing holds
when the sizes are quite different.

First, the t-test algorithm correctly allows for the increased
precision in measuring the mean in the larger group. Replacing it by a
constant only 'gains' you a little precision you don't really have.

Second, inequality of group size matters less than one might think.
Roughly, precision goes as the square root of sample size. (Under
'nice' conditions, that's exact: the standard error of estimate goes
inversely as the square root of sample size.) That means increasing
the sample size ten-fold leaves the SEE still about 1/3 of the size it
had - quite a long way from letting it be considered a constant.
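Richard's square-root scaling is easy to check numerically. The sketch below (plain Python; the unit standard deviation is purely an illustrative assumption) shows a ten-fold increase in sample size shrinking the standard error only by a factor of sqrt(10), about 3.16:

```python
import math

# Standard error of a mean: SE = sd / sqrt(n)
def standard_error(n, sd=1.0):
    return sd / math.sqrt(n)

se_small = standard_error(1000)   # n = 1,000
se_large = standard_error(10000)  # n = 10,000 (ten-fold increase)

print(round(se_small, 4))             # SE at n = 1,000
print(round(se_large, 4))             # SE at n = 10,000
print(round(se_small / se_large, 2))  # ratio: sqrt(10), still about 1/3
```

Even the larger group's mean, then, is far from a known constant.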

And at 10:42 AM 12/19/2006, Statisticsdoc (Stephen Brand) wrote:

>If your population consists of the 1000 students, then the mean of 3.5
>is a population parameter, and you would be justified in using the
>one-sample t-test suggested by John.

(This won't be quite fair to Stephen Brand, who'd also written
"Formally one should test the null hypothesis that the two samples have
the same mean, by using the independent groups t-test.")

There's a philosophical position, which I agree with, that will hardly
ever accept something like "[my] population consists of the 1000
students." The argument is that, even if those 1,000 students are all
you've ever seen or ever will see, their observed values constitute a
set generated by an underlying random mechanism, and that randomness
must be allowed for in estimation exactly as if you were aware of
100,000 similar students.

('Generated by an underlying random mechanism' is sometimes expressed
as 'drawn from a conceptually infinite population.' However, while this
is technically accurate, I don't blame anyone who considers a
'conceptually infinite population' a very odd notion.)


--
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.5.432 / Virus Database: 268.15.25/593 - Release Date: 12/19/2006 1:17 PM

--
For personalized and experienced consulting in statistics and research design, visit www.statisticsdoc.com

Re: Sample Means

Spousta Jan
Now it is my turn to support Richard a bit :-)

If the 1,000 cases were the whole available population and 500 of them
were students, then the one-sample procedure would still be
_unjustified_, because then both the population mean and the
subpopulation mean are constants, and it is nonsense to test the
difference between two constants. If the two are different, then the
difference is always "significant" in the exact meaning of the word.

The interesting case is when the sampled population is _almost_ the
whole available population (e.g., five students from the 500 are
missing), but then the statistics start to become rather complicated,
and you still cannot use the "standard" techniques under Compare Means
in SPSS. Ask Marta, she will tell you...
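The "almost the whole population" case is usually handled with the finite population correction, which the standard Compare Means output does not apply. A rough sketch, assuming simple random sampling without replacement and a made-up standard deviation of 1.5:

```python
import math

def fpc_standard_error(sd, n, N):
    """Standard error of a sample mean under simple random sampling
    without replacement from a finite population of size N."""
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return (sd / math.sqrt(n)) * fpc

# 495 of the 500 students observed: very little sampling error remains
se_almost = fpc_standard_error(sd=1.5, n=495, N=500)
# All 500 observed: the mean is a constant and the standard error is zero
se_all = fpc_standard_error(sd=1.5, n=500, N=500)
print(se_almost, se_all)
```

When n reaches N the correction drives the standard error to zero, which is exactly the "two constants" situation above.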

Jan


Re: Sample Means

statisticsdoc
Stephen Brand
www.statisticsdoc.com

Jan,

I think there is still some confusion about the research question that Samir is trying to answer, and even about the sampling design.  I will try to cover the various possibilities.

Your posting is quite correct if we assume that there are only 500 students in the population of 1,000 cases (i.e., the 1,000 cases are made up of 500 students and 500 non-students).  If in fact the larger population of 1,000 cases contains only 500 students, then there is no need to use inferential statistics - there is nothing to infer.  There is no null hypothesis to test, both means are constants, and what you say is correct.  Samir may still be interested in knowing whether the difference between the two subpopulation means is interesting and meaningful, but that is not a question of statistical significance in the sense of testing an inference about population parameters from sample statistics.  As another poster pointed out, computing the effect size would be informative.
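That effect size can be computed from the summary figures alone: since the 1,000 cases average 3.5 and the 500 students average 2.9, the 500 non-students must average 4.1. The standard deviations below are invented placeholders (the thread never gives them). A sketch of Cohen's d with a pooled standard deviation:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Students vs. non-students on the 7-point scale; SDs are placeholders
d = cohens_d(2.9, 1.4, 500, 4.1, 1.5, 500)
print(round(d, 2))
```

With these invented SDs the gap would be a large effect by the usual rules of thumb, but the real SDs would of course change the number.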

On the other hand, if the 500 students comprise a sample that was drawn in some way from a larger population, then inferential procedures are justified (either the z-statistic or the one-sample t-test).

One possibility is that the 500 students were a sample drawn from the population of 1,000.  That is, there are 1,000 students, and Samir has drawn a subsample of 500 of them.  Samir may be interested in knowing whether the sampling process that he used was unbiased.  Assuming that he knows not only the mean but also the standard deviation of the population of 1,000, he can compute the distribution of sample means with n=500 and apply the z-statistic to calculate the likelihood of obtaining the observed sample mean if the sampling process was random and unbiased.
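Concretely, that z-statistic might be computed as in the sketch below (plain Python; the population standard deviation of 1.6 is a made-up placeholder, and the two-sided normal p-value is obtained via erfc):

```python
import math

pop_mean, pop_sd = 3.5, 1.6   # known parameters of the 1,000 (sigma invented)
sample_mean, n = 2.9, 500     # the subsample of students

se = pop_sd / math.sqrt(n)            # SE of the mean for samples of n = 500
z = (sample_mean - pop_mean) / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
print(round(z, 2), p)
```

With these figures the observed subsample mean would be wildly unlikely under unbiased random sampling; a real analysis would use the actual sigma.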

Another possibility is that the 500 cases were a sample drawn from some population other than the 1,000 cases.  This is the scenario that I had in mind when I posted that the one-sample t-test would be justified.  In this instance, Samir would be interested in testing the hypothesis that the sample of 500 students was drawn from a population whose mean was equal to the mean of the population of 1,000 cases.
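A sketch of that one-sample t-test, using simulated stand-in scores (the thread gives only summary figures, so the individual responses below are invented; with n = 500 a normal approximation to the p-value is adequate):

```python
import math, random

random.seed(0)
# Invented stand-in responses for the 500 students on the 7-point scale
scores = [min(7, max(1, round(random.gauss(2.9, 1.5)))) for _ in range(500)]

mu0 = 3.5  # mean of the population of 1,000 cases, treated as fixed
n = len(scores)
mean = sum(scores) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))

t = (mean - mu0) / (sd / math.sqrt(n))  # one-sample t statistic
p = math.erfc(abs(t) / math.sqrt(2))    # normal approximation, n is large
print(round(t, 2), p < 0.05)
```

In SPSS this is simply Compare Means > One-Sample T Test with 3.5 as the test value.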

My conclusion is that Richard and I are both right, and so are you.

Cheers,

Stephen Brand

P.S. I think that I might use this example as an extra-credit question on my next stats exam :)




Re: Sample Means

Spousta Jan
Yes, you are right, Stephen, from the general point of view. It is really
interesting how many different scenarios can be hidden in such a simple
situation. But if we return to the original question...

> what test should I use if I need to assess the significance of the
> mean of some subgroup in comparison to the mean of the total
> population. For example I have 1000 respondents that answered a
> question on a 7-point scale. Their mean/average is 3.5. I have 500
> students among those 1000 respondents with mean/average 2.9. I want
> to know if these values are significantly different.

...then it seems to me (taking into account both my and Samir's somewhat
limited knowledge of English) that the 1,000 persons - probably a sample
from a much bigger population - are both students and non-students, and
that these 500 just happened to be students - they answered Yes when
asked "Are you a student?", while the others answered No. Therefore some
of the scenarios do not seem applicable here.

But the real problem is, of course, how to help Samir in his first steps
over the deep swamp of applied statistics and not confuse him even more
than needed...

Greetings,

Jan



Re: Sample Means

Samir Omerovic
Hi again to all,

Thank you all for contributing to this discussion. I must admit that I had difficulty following some of
the posts, so my reply comes late. Anyway, after reading all the posts I realize that I have not found
the solution to my problem - or rather, I have found so many that I cannot pick the one that works best.
Independent t-test, z-test, one-sample test, Cohen's d... none of them seems to work for me, since I
have not read about these tests being used in a situation like mine.
Let me ask this again: I have a survey with 1000 respondents, and among these 1000 there are 500
students (the other 500 are not students). The total of 1000 respondents has a mean value of X.X, and
the 500 students have a mean value of Y.Y. I am wondering if there is a test that can tell me whether
these two mean values are significantly different. My problem lies in the fact that the mean X.X
(1000 respondents) has been calculated with the 500 students included, so the two are obviously
dependent.
One of my friends suggested the following: since I have answers on a 7-point scale, I could maybe use
chi-square, taking the frequencies of the answers of the 1000 respondents as expected values and the
frequencies of the answers of the 500 students as test values. But the problem is the same: the
frequencies of the 500 students are included in the frequencies of the 1000 respondents. Is it OK to
use chi-square here, or not at all?
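The overlap can be made concrete with a little arithmetic on the earlier figures (overall mean 3.5, students 2.9, equal group sizes): the non-student mean is recoverable from the totals, so comparing the students with the total is really a scaled-down comparison of students with non-students:

```python
# With equal subgroup sizes, the overall mean is the midpoint of the two
# subgroup means, so the student-vs-total gap is exactly half the
# student-vs-non-student gap.
n_students, n_others = 500, 500
mean_total, mean_students = 3.5, 2.9

# Recover the non-student mean from the totals
mean_others = (mean_total * (n_students + n_others)
               - mean_students * n_students) / n_others
print(round(mean_others, 1))                 # non-student mean
print(round(mean_students - mean_total, 1))  # student-vs-total gap
print(round(mean_students - mean_others, 1)) # student-vs-non-student gap
```

This is why several posters suggested comparing the two disjoint groups directly rather than the overlapping student/total pair.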

So... do not know what to do.

Thanks once more to all

Samir




-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Spousta Jan
Sent: Wednesday, December 20, 2006 5:20 PM
To: [hidden email]
Subject: Re: Sample Means

Yes, you are true, Stephen, from the general point of wiew. It is really
interesting how many different scenarios can be hidden in such a simple
situation. But if we return to the original question...

> what test should I use if I need to assess the significance of the
mean of some
> subgroup in comparison the mean of the total population. For example I
have 1000
> respondents that answered a question on 7-point scale. Their
mean/average is 3,5.
> I have 500 students among those 1000 respondents with mean/average
2,9. I want to
> know if these values are significantly different.

...then it seems to me (taking into account both my and Samir's somehow
limited knowledge of English) that the 1000 persons - probably a sample
from a much bigger population - are both students and non-students, and
that these 500 just happened to be students - they answered Yes when
asked "Are you a student?", while the others answered No. Therefore some
of the scenarios seem not applicable here.

But the real problem is, of course, how to help Samir in his first steps
over the deep swamp of applied statistics and not confuse him even more
than needed...

Greetings,

Jan


-----Original Message-----
From: Statisticsdoc [mailto:[hidden email]]
Sent: Wednesday, December 20, 2006 4:57 PM
To: [hidden email]
Cc: Spousta Jan
Subject: Re: Sample Means

Stephen Brand
www.statisticsdoc.com

Jan,

I think there is still some confusion about the research question that
Samir is trying to answer, and even the sampling design.   I will try to
cover the various possibilities.

Your posting is quite correct if we assume that there are only 500
students in the population of 1,000 cases  - i.e. the 1000 cases are
made of 500 students and 500 non-students).  If in fact the larger
population of 1000 cases contains only 500 students, then there is no
need to utilize inferential statistics - there is nothing to infer.
There is no null hypothesis to test, both means are constants, and what
you say is correct.  Samir may still be interested in knowing whether
the difference between two subpopulation means is interesting and
meaningful, but that is not a question of statistical significance in
the sense of testing an inference about population paramters from sample
statistics.  As another poster pointed out, computing the effect size
would be informative.

On the other hand, if the 500 students comprise a sample of students
that was drawn in some way from a larger population, the additional
procedures are justified (either the z-statistic or the one-sample
t-test).

On possibility is that the 500 students were a sample drawn from the
population of 1,000.  That is, there are 1,000 students, and Samir has
drawn a subsample of 500 of them.  Samir may be interested in knowing
whether the sampling process that he used was unbiased.  Assuming that
he knows not only the mean but the population standard deviation from
the population of 1,000, he can compute the distribution of sampling
means with n=500 and apply the z-statistic the calculate the likelihood
of obtaining the observed sample mean if the sampling process was random
and unbiased.

Another possibility is that the sample of 500 cases was drawn from some
population other than the 1,000 cases.  This is the scenario that I had
in mind when I posted that the one-sample t-test would be justified.
In this instance, Samir would be interested in testing the hypothesis
that the sample of 500 students was drawn from a population whose mean
was equal to the mean of the population of 1,000 cases.
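For that one-sample t-test scenario, a sketch from raw scores; the scores below are invented for illustration, and for a large sample |t| can be compared against roughly 1.96:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(scores, pop_mean):
    """t-statistic testing whether scores come from a population
    whose mean equals pop_mean."""
    n = len(scores)
    s = stdev(scores)          # sample SD (n - 1 in the denominator)
    return (mean(scores) - pop_mean) / (s / sqrt(n))

# Hypothetical 7-point-scale answers from a small sample of students,
# tested against the overall population mean of 3.5:
scores = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]
t = one_sample_t(scores, 3.5)
```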

My conclusion is that Richard and I are both right, and so are you.

Cheers,

Stephen Brand

P.S. I think that I might use this example as an extra-credit question
on my next stats exam :)



Jan Spousta Wrote:

Now it is my turn to support Richard a bit :-)

If the 1,000 cases were the whole available population and 500 of them
were students, then the one-sample procedure would still be
_unjustified_, because then both the population mean and the
subpopulation mean are constants and it is nonsense to test the
difference between two constants. If the two are different, then the
difference is always "significant" in the exact meaning of the word.

The interesting case is when the sampled population is _almost_ the
whole available population (e.g., five students from the 500 are
missing), but then the statistics start to be rather complicated and you
still cannot use the "standard" techniques under Compare Means in SPSS.
Ask Marta, she will tell you...

Jan

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Statisticsdoc
Sent: Tuesday, December 19, 2006 10:42 PM
To: [hidden email]
Subject: Sample Means

Stephen Brand
www.statisticsdoc.com

Richard,

Thanks for citing me on both sides of this discussion :)   Let me say a
little more about why I would accept that 1,000 cases can constitute a
population, and under what conditions.

It is not too hard to imagine population definitions that encompass
small numbers of people (e.g., all of the left-handed residents of the
town of Exeter, Rhode Island; the Fall 2006 intake of a small college).

The question of whether you accept that 1,000 cases make up a population
depends on the definition of the population.  If these 1,000 cases are
all of the potential members of the population, then the mean of those
cases constitutes the population mean.  Whatever random processes might
have influenced the mean score of that population, that score is the
population parameter.  We are not trying to estimate a parameter of a
wider population from which we have obtained the 1,000 cases.  In this
instance, the one-sample procedure is justified.

Granted, you might say that the left-handed residents of Exeter, or the
2006 intake of a small college, constitute a subset of your population
of interest, but then I think that you have to allow that these cases do
not exhaust the potential membership of the population (which might
constitute the left-handed population of Rhode Island, or the various
cohorts of potential incoming first year students), and then your means
become sample statistics, not population parameters.   BTW, in this
instance, the Exeter sample is not a very random one :)

It all depends on where the boundaries of the population are drawn.

Best,

Stephen Brand

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Richard Ristow
Sent: Tuesday, December 19, 2006 2:59 PM
To: [hidden email]
Subject: Re: Significant difference - Means

To weigh in with two comments:

At 03:54 AM 12/19/2006, Spousta Jan wrote:

>The error of that 3.5 is about sqrt(1/1000) = 0.03 while the error of
>2.9 for students is about sqrt(1/500) = 0.045. That is both errors are
>of the same order of magnitude and the population error cannot be
>neglected in this case.

I'd like to second, and emphasize, this. Jan is clearly right here,
where the two groups are the same size. However, the same thing holds
when the sizes are quite different.

First, the t-test algorithm correctly allows for the increased precision
in measuring the mean in the larger group. Replacing it by a constant
only 'gains' you a little precision you don't really have.

Second, inequality of group size matters less than one might think.
Roughly, precision goes as the square root of sample size. (Under 'nice'
conditions, that's exact: standard error of estimate goes as the square
root of sample size.) That means increasing the sample size ten-fold
leaves the SEE still 1/3 of the size it had - quite a long way from
letting it be considered a constant.
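Richard's square-root law is easy to see numerically; the SD of 1.0 below is arbitrary and purely illustrative:

```python
from math import sqrt

sd = 1.0                      # arbitrary population SD for illustration
se_small = sd / sqrt(100)     # standard error with n = 100
se_large = sd / sqrt(1000)    # ten times the sample size
ratio = se_large / se_small   # 1/sqrt(10), about 0.32: smaller, but
                              # nowhere near zero, so the larger group's
                              # mean is still not a constant
```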

And at 10:42 AM 12/19/2006, Statisticsdoc (Stephen Brand) wrote:

>If your population consists of the 1000 students, then the mean of 3.5
>is a population parameter, and you would be justified in using the
>one-sample t-test suggested by John.

(This won't be quite fair to Stephen Brand, who'd also written "Formally
one should test the null hypothesis that the two samples have the same
mean, by using the independent groups t-test.")

There's a philosophical position, which I agree with, that will hardly
ever accept something like "[my] population consists of the 1000
students." The argument is that, even if those 1,000 students are all
you've ever seen or ever will see, their observed values constitute a
set generated by an underlying random mechanism, and that randomness
must be allowed for in estimation exactly as if you were aware of
100,000 similar students.

('Generated by an underlying random mechanism' is sometimes expressed as
'drawn from a conceptually infinite population.' However, while this is
technically accurate, I don't blame anyone who considers a 'conceptually
infinite population' a very odd notion.)


--
For personalized and experienced consulting in statistics and research
design, visit www.statisticsdoc.com


Re: Sample Means

Spousta Jan
In reply to this post by statisticsdoc
Once again, Samir, use the independent-samples t-test for the two groups, 500 students and 500 others. The most important premises:

* The 1,000 respondents were randomly selected from a much larger set of people (= your population of concern), and you wish to make an inference about that whole population from this random sample of 1,000 people. That is, you ask whether in the whole population the mean of all students is different from the mean of the whole population.

* The answers of the respondents are independent (each of them was asked independently; they are not relatives, etc.).

* The distribution of answers in both groups is approximately Gaussian (of course it cannot be exactly Gaussian because of the discrete nature of your scale, but in practice we treat these variables as Gaussian unless there are huge deviations from normality).

Then you can apply the t-test for the difference of means between the two groups of 500. That is, you test whether the mean of students is different from the mean of other people.
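That test can be sketched from summary statistics. The 2.9 follows from Samir's figures, and with equal groups the non-student mean must be 2*3.5 - 2.9 = 4.1; the SDs of 1.8 are pure assumptions for illustration:

```python
from math import sqrt

def two_sample_t(m1, s1, n1, m2, s2, n2):
    """Equal-variances independent-samples t-statistic from summary
    statistics (pooled variance)."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Students: mean 2.9; others: mean 2*3.5 - 2.9 = 4.1 (implied by the
# overall mean with equal groups). SDs of 1.8 are hypothetical.
t = two_sample_t(2.9, 1.8, 500, 4.1, 1.8, 500)
```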

If the answer is YES, and e.g.

MeanStud > MeanOthers,

then it is clear that also

MeanStud > MeanAll

Proof:

MeanAll = (N_Stud * MeanStud + N_Others * MeanOthers) / (N_Stud + N_Others) = (N_Stud * MeanStud + N_Others * MeanOthers) / N

And therefore

MeanAll = MeanStud * ((N - N_Others) / N) + MeanOthers * (N_Others / N)
  = MeanStud - (MeanStud - MeanOthers) * (N_Others / N)

And because (MeanStud - MeanOthers) * (N_Others / N) is a positive number (we know it from the premise MeanStud > MeanOthers), we proved that MeanStud > MeanAll.

Q.E.D.

Similarly we can prove that if MeanStud < MeanOthers, then MeanStud < MeanAll and if MeanStud = MeanOthers, then MeanStud = MeanAll. So the T-test answers directly your question and solves the problem of dependent means.
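The identity in the proof above is easy to verify numerically; the means and counts below are arbitrary illustrations:

```python
n_stud, n_others = 500, 500
mean_stud, mean_others = 2.9, 4.1      # arbitrary illustrative values
n = n_stud + n_others

# Definition of the overall mean...
mean_all = (n_stud * mean_stud + n_others * mean_others) / n
# ...and Jan's rewritten form: MeanStud - (MeanStud - MeanOthers)*(N_Others/N)
rewritten = mean_stud - (mean_stud - mean_others) * (n_others / n)

# The two expressions agree, and since mean_stud < mean_others here,
# mean_stud < mean_all, as the corollary states.
```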

When in doubt, believe Richard Ristow.

Best wishes,

Jan

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Samir Omerović
Sent: Thursday, December 21, 2006 9:01 AM
To: [hidden email]
Subject: Re: Sample Means

Hi again to all,

Thank you all for contributing to this discussion. I must admit that I had difficulty following some of the posts, so my reply comes late. Anyway, after reading all the posts, I find that I have not found the solution to my problem - or rather, I found so many that I cannot pick the one that works best. Independent t-test, z-test, one-sample test, Cohen's d... none of them seems to work for me, since I have not read about these tests being used in a situation like mine.
Let me ask this again: I have a survey of 1,000 respondents, and among these 1,000 there are 500 students (the other 500 are not students). The total of 1,000 respondents has a mean value of X,X and the 500 students have a mean value of Y,Y. I am wondering whether there is a test that can tell me if these two mean values are significantly different. My problem lies in the fact that the mean X,X (1,000 respondents) has been calculated with the 500 students included, so the two are obviously dependent.
One of my friends suggested the following: since I have answers on a 7-point scale, I could maybe use chi-square, taking the frequencies of answers of the 1,000 respondents as expected values and the frequencies of answers of the 500 students as test values. But the problem is the same: the frequencies of the 500 students are included in the frequencies of the 1,000 respondents. And is it OK to use chi-square here, or not at all?

So... do not know what to do.

Thanks once more to all

Samir




-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Spousta Jan
Sent: Wednesday, December 20, 2006 5:20 PM
To: [hidden email]
Subject: Re: Sample Means

Yes, you are right, Stephen, from the general point of view. It is really interesting how many different scenarios can be hidden in such a simple situation. But if we return to the original question...

> what test should I use if I need to assess the significance of the
mean of some
> subgroup in comparison the mean of the total population. For example I
have 1000
> respondents that answered a question on 7-point scale. Their
mean/average is 3,5.
> I have 500 students among those 1000 respondents with mean/average
2,9. I want to
> know if these values are significantly different.

...then it seems to me (taking into account both my and Samir's somewhat limited knowledge of English) that the 1,000 persons - probably a sample from a much bigger population - are both students and non-students, and that these 500 just happened to be students - they answered Yes when asked "Are you a student?", while the others answered No. Therefore some of the scenarios do not seem applicable here.

But the real problem is, of course, how to help Samir in his first steps over the deep swamp of applied statistics and not confuse him even more than needed...

Greetings,

Jan



Re: Sample Means

Arthur Kramer
In reply to this post by Samir Omerovic
Samir,

If I were you, I would separate the students from the other subjects in the
dataset. Make two new files - one with students only, one without students -
and create a new variable in each file that identifies which file it is, and
populate that field with an identifier for the file. That is not difficult:
it can be done with a "SELECT IF" syntax statement, using whatever variable
you have that identifies students, or via the drop-down from "Data" on the
toolbar. Just remember to save any new files under new names so you don't
destroy your original file. (I don't use version 14 yet, so I don't know how
to use the two files together; I would merge these two files, using another
new file name so as not to destroy the files I just created.) Then I could
do any test with these two groups, and they would be independent of each
other.  I still think that with files of this size any difference in means
will be statistically significant, and effect size is the way to go with
this type of analysis.
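Outside SPSS, the split-and-flag idea above can be sketched in Python; the field names and records here are hypothetical:

```python
# Each respondent is a dict; 'is_student' is a hypothetical flag field.
respondents = [
    {"id": 1, "is_student": 1, "answer": 3},
    {"id": 2, "is_student": 0, "answer": 5},
    {"id": 3, "is_student": 1, "answer": 2},
]

# Split into two "files" and tag each record with its group identifier,
# leaving the original list untouched.
students = [dict(r, group="student") for r in respondents if r["is_student"]]
others = [dict(r, group="other") for r in respondents if not r["is_student"]]

# Merged back together, the two groups can be compared as independent.
merged = students + others
```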

The chi-square is a good way to go because your data are essentially
non-parametric (i.e., ordinal) in nature, unless you can document that the
"space" between 1 and 2, 3 and 4, etc. is based on a common metric among
all of your subjects.
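A sketch of the goodness-of-fit chi-square Samir's friend had in mind, with invented frequencies; note that the dependence problem Samir raised still applies, since the students' counts are part of the overall counts:

```python
# Observed answer counts for the 500 students across the 7 scale points
# (invented numbers), and expected counts scaled down from the overall
# distribution of all 1,000 respondents.
observed = [90, 110, 95, 80, 60, 40, 25]        # sums to 500
overall = [120, 160, 150, 170, 150, 140, 110]   # sums to 1,000
expected = [c * 500 / 1000 for c in overall]    # rescale to n = 500

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1   # 6 df; the .05 critical value is about 12.59
```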

Arthur Kramer, Ph.D.
Director of Institutional Research
New Jersey City University

Re: Sample Means

Arthur Kramer
In reply to this post by Samir Omerovic
I guess I included some superfluous steps in my last post, since you already
have a field identifying who is a student and who isn't.  My point was to
look at the groups independently of one another.

One thing that hasn't been addressed, though, is that as the number of
degrees of freedom gets large, e.g., exceeding 120, the critical values of z
and t are essentially identical - especially with samples the size you have
(just look at the tables in any introductory statistics book).  So it really
doesn't matter which of these tests you use.
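Arthur's point, numerically: the two-sided 5% critical value of z is about 1.960, while the t critical value at df = 120 is about 1.980 (taken from standard tables, not computed here), a gap of roughly one percent:

```python
from statistics import NormalDist

z_crit = NormalDist().inv_cdf(0.975)   # two-sided 5% z critical value
t_crit_120 = 1.980                     # t critical at df = 120, from tables
gap = (t_crit_120 - z_crit) / z_crit   # relative difference, about 1%
```

With n in the hundreds, the df are far beyond 120, so the distinction is practically irrelevant.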

Another effect size you could examine is the correlation coefficient, which
might be more meaningful if you are going to use parametric statistics.

Arthur Kramer

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Samir Omerovic
Sent: Thursday, December 21, 2006 3:01 AM
To: [hidden email]
Subject: Re: Sample Means

Hi again to all,

Thank you all for contributing to this discussion. I must admit that I had
difficulty following some of the posts, so my reply comes late. Anyway, after
reading all the posts I realize that I have not found the solution to my
problem, or rather I have found so many that I cannot pick the one that works
best. The independent t-test, z-test, one-sample test, Cohen's d... none of
them seem to work for me, since I have not read about these tests being used
in a situation like mine.
Let me ask this again: I have a survey with 1000 respondents, and among these
1000 there are 500 students (the remaining 500 are not students). The total
of 1000 respondents has a mean value of X,X and the 500 students have a mean
value of Y,Y. I am wondering if there is a test that can tell me whether
these two mean values are significantly different. My problem lies in the
fact that the mean X,X (1000 respondents) has been calculated with the 500
students included, so the two are obviously dependent.
One of my friends suggested the following: since the answers are on a 7-point
scale, I could maybe use chi-square, taking the frequencies of the answers of
the 1000 respondents as expected values and the frequencies of the answers of
the 500 students as test values. But the problem is the same: the frequencies
of the 500 students are included in the frequencies of the 1000 respondents.
Is it OK to use chi-square here, or not at all?

So... I do not know what to do.

Thanks once more to all

Samir




-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Spousta Jan
Sent: Wednesday, December 20, 2006 5:20 PM
To: [hidden email]
Subject: Re: Sample Means

You are right, Stephen, from the general point of view. It is really
interesting how many different scenarios can be hidden in such a simple
situation. But if we return to the original question...

> what test should I use if I need to assess the significance of the
mean of some
> subgroup in comparison the mean of the total population. For example I
have 1000
> respondents that answered a question on 7-point scale. Their
mean/average is 3,5.
> I have 500 students among those 1000 respondents with mean/average
2,9. I want to
> know if these values are significantly different.

...then it seems to me (taking into account both my and Samir's somewhat
limited knowledge of English) that the 1000 persons - probably a sample
from a much bigger population - include both students and non-students, and
that these 500 just happened to be students - they answered Yes when
asked "Are you a student?", while the others answered No. Therefore some
of the scenarios seem not to apply here.

But the real problem is, of course, how to help Samir in his first steps
over the deep swamp of applied statistics and not confuse him even more
than needed...

Greetings,

Jan


-----Original Message-----
From: Statisticsdoc [mailto:[hidden email]]
Sent: Wednesday, December 20, 2006 4:57 PM
To: [hidden email]
Cc: Spousta Jan
Subject: Re: Sample Means

Stephen Brand
www.statisticsdoc.com

Jan,

I think there is still some confusion about the research question that
Samir is trying to answer, and even about the sampling design.  I will try
to cover the various possibilities.

Your posting is quite correct if we assume that there are only 500
students in the population of 1,000 cases (i.e., the 1,000 cases are
made up of 500 students and 500 non-students).  If in fact the larger
population of 1,000 cases contains only 500 students, then there is no
need to use inferential statistics - there is nothing to infer.
There is no null hypothesis to test, both means are constants, and what
you say is correct.  Samir may still be interested in knowing whether
the difference between the two subpopulation means is interesting and
meaningful, but that is not a question of statistical significance in
the sense of testing an inference about population parameters from sample
statistics.  As another poster pointed out, computing the effect size
would be informative.

On the other hand, if the 500 students comprise a sample of students
that was drawn in some way from a larger population, the additional
procedures are justified (either the z-statistic or the one-sample
t-test).

One possibility is that the 500 students were a sample drawn from the
population of 1,000.  That is, there are 1,000 students, and Samir has
drawn a subsample of 500 of them.  Samir may be interested in knowing
whether the sampling process that he used was unbiased.  Assuming that
he knows not only the mean but also the standard deviation of the
population of 1,000, he can compute the distribution of sample means
with n=500 and apply the z-statistic to calculate the likelihood of
obtaining the observed sample mean if the sampling process was random
and unbiased.
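A minimal sketch of this z-test scenario in Python. The 3.5 and 2.9 are the means from Samir's question; the population SD of 1.8 is an invented placeholder, and the finite-population correction (which matters when you sample 500 of only 1,000 without replacement) is omitted for simplicity:

```python
import math

def z_statistic(sample_mean, pop_mean, pop_sd, n):
    """z = (sample mean - population mean) / (pop SD / sqrt(n))."""
    return (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))

# Placeholder SD of 1.8; with |z| far above 1.96, a subsample mean of 2.9
# would be very unlikely under random, unbiased sampling from a population
# whose mean is 3.5.
z = z_statistic(sample_mean=2.9, pop_mean=3.5, pop_sd=1.8, n=500)
```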

Another possibility is that the sample of 500 cases was drawn from some
population other than the 1,000 cases.  This is the scenario that I had
in mind when I posted that the one-sample t-test would be justified.
In this instance, Samir would be interested in testing the hypothesis
that the sample of 500 students was drawn from a population whose mean
was equal to the mean of the population of 1,000 cases.

My conclusion is that Richard and I are both right, and so are you.

Cheers,

Stephen Brand

P.S. I think that I might use this example as an extra-credit question
on my next stats exam :)



Jan Spousta Wrote:

Now it is my turn to support Richard a bit :-)

If the 1,000 cases were the whole available population and 500 of them
were students, then the one-sample procedure would still be
_unjustified_, because then both the population mean and the
subpopulation mean are constants and it is nonsense to test the
difference between two constants.  If the two are different, then the
difference is always "significant" in the exact meaning of the word.

The interesting case is when the sampled population is _almost_ the
whole available population (e.g., five students out of the 500 are
missing), but then the statistics start to get rather complicated and
you still cannot use the "standard" techniques under Compare Means in
SPSS.  Ask Marta, she will tell you...

Jan


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Richard Ristow
Sent: Tuesday, December 19, 2006 2:59 PM
To: [hidden email]
Subject: Re: Significant difference - Means

To weigh in with two comments:

At 03:54 AM 12/19/2006, Spousta Jan wrote:

>The error of that 3.5 is about sqrt(1/1000) = 0,03 while the error of
>2.9 for students is about sqrt(1/500) = 0.045. That is both errors are
>of the same order of magnitude and the population error cannot be
>neglected in this case.

I'd like to second, and emphasize, this. Jan is clearly right here,
where the two groups are the same size. However, the same thing holds
when the sizes are quite different.

First, the t-test algorithm correctly allows for the increased precision
in measuring the mean in the larger group. Replacing it by a constant
only 'gains' you a little precision you don't really have.

Second, inequality of group size matters less than one might think.
Roughly, precision goes as the square root of sample size. (Under 'nice'
conditions, that's exact: standard error of estimate goes as the square
root of sample size.) That means increasing the sample size ten-fold
leaves the SEE still 1/3 of the size it had - quite a long way from
letting it be considered a constant.
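Richard's square-root point in numbers, as a quick Python sketch (the SD of 1.0 is an arbitrary placeholder, not a value from the thread):

```python
import math

def standard_error(sd, n):
    """Standard error of a sample mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

se_100  = standard_error(1.0, 100)    # 0.1
se_1000 = standard_error(1.0, 1000)   # about 0.0316
ratio = se_1000 / se_100              # about 0.316: a ten-fold larger
                                      # sample still has ~1/3 the error
```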

And at 10:42 AM 12/19/2006, Statisticsdoc (Stephen Brand) wrote:

>If your population consists of the 1000 students, then the mean of 3.5
>is a population parameter, and you would be justified in using the
>one-sample t-test suggested by John.

(This won't be quite fair to Stephen Brand, who'd also written "Formally
one should test the null hypothesis that the two samples have the same
mean, by using the independent groups t-test.")

There's a philosophical position, which I agree with, that will hardly
ever accept something like "[my] population consists of the 1000
students." The argument is that, even if those 1,000 students are all
you've ever seen or ever will see, their observed values constitute a
set generated by an underlying random mechanism, and that randomness
must be allowed for in estimation exactly as if you were aware of
100,000 similar students.

('Generated by an underlying random mechanism' is sometimes expressed as
'drawn from a conceptually infinite population.' However, while this is
technically accurate, I don't blame anyone who considers a 'conceptually
infinite population' a very odd notion.)



Re: Sample Means

Rick Bello
In reply to this post by statisticsdoc
I would respectfully disagree with Jan's suggestion of using a t-test as
the assumption of independence is violated.  Although within each group
the samples are independent (each student or non-student is entered only
once) the students contribute to both groups.  I would tend to agree with
Samir's suggestion of using a chi square goodness of fit test here.

Rick Bello, MD, PhD
Albert Einstein College of Medicine
New York, NY

Re: Sample Means

Spousta Jan
In reply to this post by statisticsdoc
Just one very basic remark: chi-square tests weren't created for testing
differences in means, but for testing differences in the shapes of discrete
distributions. It is always better to use scissors for cutting paper and
not as a screwdriver :-)

In other words, it is possible to have a highly significant chi-square test
and zero difference in means.

Jan
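Jan's point can be made concrete with a small example (invented counts on a 7-point scale, not Samir's data): two distributions with identical means can still produce a chi-square statistic far beyond the critical value, because the test compares shapes, not means.

```python
# Counts of answers 1..7 for two invented groups of 100 respondents each.
observed = [25, 10, 10, 10, 10, 10, 25]   # flat middle, heavy tails
expected = [10, 10, 10, 40, 10, 10, 10]   # peaked at 4

def mean_of(counts):
    """Mean answer, where counts[i] is the frequency of answer i+1."""
    return sum(c * v for v, c in enumerate(counts, start=1)) / sum(counts)

def chi_square(obs, exp):
    """Pearson goodness-of-fit statistic: sum over bins of (O-E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(obs, exp))

assert mean_of(observed) == mean_of(expected) == 4.0  # identical means
stat = chi_square(observed, expected)  # 67.5, far above the ~12.59
                                       # critical value at df=6, alpha=.05
```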

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Arthur Kramer
Sent: Thursday, December 21, 2006 3:36 PM
To: [hidden email]
Subject: Re: Sample Means

Samir,

If I were you, I would separate the students from the other subjects in
the dataset. Make two new files - one with students only, one without
students - and create a new variable in each file that identifies which
file it is, populating that field with an identifier for the file. That
is not difficult; it can be done with a SELECT IF syntax statement,
using whatever you have that identifies students, or with the drop-down
from "Data" on the toolbar. Just remember to save any new files under
new names so you don't destroy your original file. (I don't use version
14 yet, so I don't know how to use the two together; I would merge these
two files, using another new file name so as not to destroy the files I
just created.) Then you can run any test with these two groups and they
will be independent of each other.  I still think that with files of
this size any difference in means will be statistically significant, and
effect size is the way to go with this type of analysis.

The chi-square is a good way to go because your data are essentially
non-parametric (i.e., ordinal) in nature, unless you can document that
the "spaces" between 1 and 2, 3 and 4, etc. are based on a common metric
among all of your subjects.

Arthur Kramer, Ph.D.
Director of Institutional Research
New Jersey City University

Re: Sample Means

Spousta Jan
In reply to this post by statisticsdoc
I understand your concern, Rick, but I suggested testing the two
independent subgroups against each other (students vs. non-students),
which is clearly OK if all the other premises of the t-test hold.

And then I showed that if these two subgroups are significantly
different, then students vs. all people are also significantly
different, regardless of the dependency; that is, the test can be
extended in this way.

As I already wrote, the chi-square test is inappropriate because it does
not test differences of means but differences of shapes. And as Samir
wrote, it faces the same problem: students vs. all are not independent
:-)

Therefore I respectfully disagree with your suggestion and wish you (and
all other list members) a merry Christmas and a happy New Year.

Jan



Re: Sample Means

statisticsdoc
In reply to this post by Samir Omerovic
Stephen Brand
www.statisticsdoc.com

Samir,

Thank you for the update and clarification of your research question.  In
the following, I am assuming that the 1,000 cases are a sample, not a
population, and that this sample of 1,000 cases can be divided into 500
students and 500 non-students.  If I have misunderstood you, please let me
know.

I would suggest that you focus on the question of whether the mean answer of
the 500 students differs from the mean answer of the 500 non-students.
Subject to certain conditions, the best way to do this is to carry out an
independent-samples t-test.  This tests the null hypothesis that the student
and non-student samples are drawn from populations with the same mean.  If
you reject the null hypothesis, your findings support the view that being a
student versus a non-student makes a difference to survey responses.

Strictly speaking, carrying out the independent-samples t-test is not quite
the same thing as testing whether the student sample came from a population
in which the mean response was 3.5 (i.e., the mean of the overall sample of
1,000).  I suggest that comparing the student sample with the overall sample
mean would not be an informative question to pursue in this study.  The
value of 3.5 is not a population parameter.  It would not be valid to use
the overall sample mean of 3.5 as an estimate of the overall population
parameter and then test whether the student sub-sample differed from this
value, because the student sub-sample contributed to the estimation of the
overall population mean (which means you might fail to find a difference
between the mean of the student sub-sample and the mean of the general
population when in fact one exists).  There are scenarios in which you could
use a one-sample t-test, or z-test, to test the difference between a student
sample and an overall population mean, but they do not apply to the dataset
that you describe.  The fact that the 1,000 cases include the student
sub-sample, and that the 1,000 cases do not comprise a population, rules out
these analyses.  Instead, you can look at the influence of student status on
the survey item by comparing students and non-students.

As far as the mechanics of carrying out an independent-groups t-test go, you
need to start with all 1,000 cases in one dataset, with a variable that
denotes which cases are students and which are not.  For the sake of
discussion, we will call this variable STUDENT and assign the value 1 to
students and 0 to non-students.  Let's call the variable that contains
ratings of the survey scale SAMIR01.

T-TEST GROUPS=STUDENT(0,1) /VARIABLES=SAMIR01.

A critical issue in running the t-test concerns the equality of variances
between the two groups. The output will show you Levene's test for equality
of variances (if this is significant, then the assumption of equal variances
should not be made). The output will also present significance tests with
and without the assumption of equality of variances.
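For readers without SPSS at hand, here is a rough Python sketch of the unequal-variance (Welch) form of the statistic that SPSS reports alongside the pooled one. The two short rating lists are placeholders, not Samir's data:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic and Welch-Satterthwaite degrees of freedom."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb          # squared SE of the mean difference
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Placeholder data; with n=500 per group the df would far exceed 120, so
# (as Arthur noted) the t and z critical values coincide and |t| > 1.96
# rejects at the .05 level.
t, df = welch_t([2, 3, 4, 4, 5, 6], [1, 2, 2, 3, 3, 4])
```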

A few other suggestions:

What I have described above assumes that the student and non-student samples
are independent.  If the student and non-student samples are in some way
matched (e.g., the survey was given to each student and to a younger
non-student sibling or a parent), then it is possible to increase the
statistical power of the analysis by carrying out a paired-samples t-test
(statistical power means you have a better chance of rejecting the null
hypothesis when there is a difference).  If this is the case, look into
running a paired-samples t-test.

Look at the distribution of the survey ratings in the two samples.  If your
variables are not normally distributed - if they deviate a great deal from
normality - you can consider a non-parametric alternative.

Don't use chi-square to test the difference in the distributions of the
samples.  Samples can have different distributions but equivalent means.
Also, you are quite right to note that the inclusion of the 500 in the 1,000
poses a problem for the chi-square analysis - it violates the assumption in
chi-square that the observations are independent.

HTH,

Stephen Brand


For personalized and professional consultation in statistics and research
design, visit
www.statisticsdoc.com


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]]On Behalf Of
Samir Omerović
Sent: Thursday, December 21, 2006 3:01 AM
To: [hidden email]
Subject: Re: Sample Means


Hi again to all,

Thank you all for contributing to this discussion. I must admit that some of
posts I had difficulty
to follow so my reply comes late. Anyway after read all the posts I figure
out that I did not find
the solution to my problem, or rather I find so many I can not pick one that
works best. Independent
T test, Z-test, One sample test, Cohan d... they all seem not to work for me
since I have not read
about these tests used in situation like mine.
Let me ask this again: I have a survey with 1,000 respondents, and among these
1,000 there are 500 students (the remaining 500 are not students). The total of
1,000 respondents has a mean value of X.X, and the 500 students have a mean
value of Y.Y. I am wondering whether there is a test that can tell me if these
two mean values are significantly different. My problem lies in the fact that
the mean X.X (1,000 respondents) was calculated with the 500 students included,
so the two means are obviously dependent.
One of my friends suggested the following: since I have answers on a 7-point
scale, maybe I could use chi-square, taking the frequencies of the answers of
the 1,000 respondents as expected values and the frequencies of the answers of
the 500 students as observed values. But the problem is the same: the
frequencies of the 500 students are included in the frequencies of the 1,000
respondents. So is it OK to use chi-square here, or not at all?

So... I do not know what to do.

Thanks once more to all

Samir

Re: Sample Means

Richard Ristow
In reply to this post by Samir Omerovic
At 03:01 AM 12/21/2006, Samir Omerović wrote:

>I must admit that some of posts I had difficulty
>to follow so my reply comes late.

Yeah, well, we went down a bunch of paths, didn't we?

>I figure out that I did not find the solution to
>my problem, or rather I find so many I can not
>pick one that works best. Independent T test,
>Z-test, One sample test, Cohan d...

Well, at 11:56 AM 12/21/2006, Stephen Brand
(Statisticsdoc) suggested reformulating your question:

>I would suggest that you focus on the question
>of whether the mean answer of the 500 students
>differs from the mean answer of the 500 non-students.

instead of your original question,

>What test should I use if I need to assess the
>significance of the mean of some subgroup in
>comparison to the mean of the total population.

I think this is right: compare the two parts, not
one part with the whole. If you don't agree, ask
again, and we'll see what we can say.

To compare the two separate groups, the
independent-samples t-test is the best answer, as
Stephen Brand and Jan Spousta have both recommended.

Stephen wrote that using the t-test is "subject
to certain conditions," which it is, but in your
case those conditions will apply plenty well enough.
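
(In SPSS this is just the T-TEST command with a grouping variable. For anyone
following along outside SPSS, here is a minimal sketch of the pooled-variance
t statistic; the 7-point answers below are made up for illustration, not
Samir's data.)

```python
from statistics import mean, variance

def pooled_t(x, y):
    """Equal-variance independent-samples t statistic and its df."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    se = (sp2 * (1 / nx + 1 / ny)) ** 0.5
    return (mean(x) - mean(y)) / se, nx + ny - 2

# hypothetical 7-point-scale answers, NOT Samir's data
students = [5, 6, 4, 7, 5, 6]
nonstudents = [3, 4, 5, 3, 4, 2]
t, df = pooled_t(students, nonstudents)
print(round(t, 3), df)  # t = 3.303 on 10 df
```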

I'd say you can stop reading here, and go ahead.
....................
To speak to some other points that have been raised,

A) We talked a lot about whether your 1,000
should be considered the 'population' in the
statistical sense. You just wrote,

>I have survey done with 1000 respondents and if
>among these 1000 I got 500 students (and the
>rest 500 are not students).

So the answer is clear: Your 1,000 are *NOT* the
population, in the sense we've been talking
about. They're a sample (with two sub-samples,
students and not). Our discussion about what
constitutes a 'population' is important in
general, but it's not relevant to you.

B) At 09:49 AM 12/21/2006, Arthur Kramer wrote:

>Another effect size you could examine is the correlation coefficient,

Arthur, can you help? *I* don't understand this
one. Correlation of what with what? The only
possibility I can see is the correlation of
response with the student/non student binary
variable, and I'd think testing for difference of
group means would be much more meaningful.

C) At 09:56 AM 12/21/2006, Rick Bello wrote:

>I would respectfully disagree with Jan's
>suggestion of using a t-test as the assumption
>of independence is violated.  Although within
>each group the samples are independent (each
>student or non-student is entered only once) the
>students contribute to both groups.

Quite right; but reformulating as a
between-groups comparison is probably the best solution.

D) Samir asked,

>Since I have answers at 7-point scale, I could
>maybe use Chi-square. If I take the frequency of
>answers of 1000 respondents as expected values
>and the frequency of answers of 500 students as
>test values. The problem is the same.
>Frequencies of 500 students are included in
>frequencies of 1000 respondents. And is it ok to
>use chi-square here or not at all?

That wouldn't solve your problem. As with the
t-test, you'd need to compare the students with
the (500) non-students, not with the whole 1,000.
That is, you need to compare the two separate
parts, not one part with the whole.
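
One way to see why part-vs-whole is the wrong comparison: with equal group
sizes, the overall mean is just the average of the two group means, so the
student-vs-total gap is exactly half the student-vs-non-student gap. A tiny
arithmetic check, with illustrative means only:

```python
students_mean, nonstudents_mean = 5.5, 3.5  # hypothetical group means
# with equal group sizes (500 each), the overall mean averages the two
overall_mean = (students_mean + nonstudents_mean) / 2
# the part-vs-whole gap is exactly half of the part-vs-part gap
assert students_mean - overall_mean == (students_mean - nonstudents_mean) / 2
print(students_mean - overall_mean)  # 1.0
```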

(As Jan Spousta wrote, the chi-square tests for a
different set of effects than the t-test does.
Sometimes comparing the results of the two is
illuminating, but that's for a further discussion sometime.)

E) Regarding this, at 09:35 AM 12/21/2006, Arthur Kramer wrote:

>The chi-square is a good way to go because your
>data are essentially non-parametric (i.e.,
>ordinal) in nature, unless you can document that
>the "space" between 1 and 2, 3 and 4, etc. are
>based on a common metric among all of your subjects.

True; but in cases like this, a rough-and-ready
assumption that the scale steps are about equal usually works well enough.

If you don't trust that, use an ordinal
non-parametric test, not chi-square. Marta
García-Granero has prepared some tutorials on
non-parametric methods, including these.

-Good luck to you,
  Richard Ristow

Re: Sample Means

Arthur Kramer
Richard, et al.,

Since Samir is assuming the responses constitute a scale, I am suggesting
correlate the students scores on the scale with the non-student scores on
the scale.  If Samir takes the advice of performing an independent groups
t-test with the dependent variable being the scale score difference between
the two groups, that same variable can yield a Pearson correlation obtained
by using group membership as a dummy coded predictor regressed onto the
scale score.  That might also be more appropriate with the large n, because
as I said, with this many subjects any difference is apt to obtain
significance.  The correlation is just another way of measuring effect size,
isn't it?

If he does want to go the non-parametric route, a Mann-Whitney U may provide
some insight into the "scale" score.  Doing a chi-square may necessitate
multiple goodness-of-fit analyses:  the percentage of students saying "1"
compared to the percentage of non-students saying "1", and so on up to
seven--do you protect your type 1 error then?

Arthur Kramer

-----Original Message-----
From: Richard Ristow [mailto:[hidden email]]
Sent: Thursday, December 21, 2006 4:32 PM
To: Samir Omerovi; [hidden email]
Cc: Statisticsdoc; Spousta Jan; Arthur Kramer; Rick Bello
Subject: Re: Sample Means



Re: Sample Means

Richard Ristow
At 05:15 PM 12/21/2006, Arthur Kramer wrote:

>Since Samir is assuming the responses constitute a scale, I am
>suggesting correlate the students scores on the scale with the
>non-student scores on the scale.

You may have mis-phrased this. You can't correlate the values of one
variable between two groups; a correlation is of two variables over one
set of observations. Now,

>[The problem] can yield a Pearson correlation obtained by using group
>membership as a dummy coded predictor [correlated with] the scale
>score.  That might also be more appropriate with the large n, because
>as I said, with this many subjects any difference is apt to obtain
>significance.  The correlation is just another way of measuring effect
>size, isn't it?

Well, relative effect size. Since this is an ANOVA problem (the
independent-samples t-test is the one-factor, two-level special case of
ANOVA), one might want to give R^2 (the square of your correlation
coefficient), i.e. percent of variance explained, to follow standard
ANOVA terminology.
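
For anyone who wants to see that equivalence concretely, a sketch: correlate
the scale score with a 0/1 group dummy; the squared correlation equals the
t-test's R-squared via r^2 = t^2 / (t^2 + df). Toy 7-point answers, not
Samir's data:

```python
def pearson_r(u, v):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

scores = [5, 6, 4, 7, 5, 6, 3, 4, 5, 3, 4, 2]  # toy 7-point answers
group = [1] * 6 + [0] * 6                      # 1 = student, 0 = non-student
r = pearson_r(group, scores)
print(round(r * r, 3))  # 0.522: share of variance explained by group
```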

I said, 'relative effect size'. For absolute effect size, you'd
estimate a confidence interval for the difference of the group means:
"the 95% confidence interval for the difference is 0.8 to 2.1 scale
levels."
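
A sketch of that interval, assuming the groups are large enough that 1.96 can
stand in for the exact t quantile (with Samir's 500 per group that is fine;
with the six-per-group toy lists below it is only illustrative):

```python
from statistics import mean, variance

def diff_ci95(x, y, z=1.96):
    """Approximate 95% CI for mean(x) - mean(y), normal approximation."""
    d = mean(x) - mean(y)
    se = (variance(x) / len(x) + variance(y) / len(y)) ** 0.5
    return d - z * se, d + z * se

lo, hi = diff_ci95([5, 6, 4, 7, 5, 6], [3, 4, 5, 3, 4, 2])
print(round(lo, 2), round(hi, 2))  # roughly 0.81 to 3.19 scale levels
```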

>If he does want to go the non-parametric route, a Mann-Whitney U may
>provide some insight into the "scale" score.

Marta has warned about Mann-Whitney U when there are few (like 7)
possible values of the dependent variable, so there will be many ties.
I haven't worked this up from her tutorials, but I'd look at them
before using Mann-Whitney incautiously. (I'd used it, incautiously, in
many cases like this, before Marta's warning.)
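
To see why ties matter on a 7-point scale, here is a sketch of the U statistic
computed with midranks for tied values; the usual normal approximation then
needs a tie correction to its variance, which is Marta's point. Toy data again:

```python
def mann_whitney_u(x, y):
    """U for sample x, using midranks so tied 7-point answers share a rank."""
    combined = sorted(x + y)
    midrank = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        midrank[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1..j
        i = j
    rank_sum = sum(midrank[v] for v in x)
    return rank_sum - len(x) * (len(x) + 1) / 2

u = mann_whitney_u([5, 6, 4, 7, 5, 6], [3, 4, 5, 3, 4, 2])
print(u)  # 33.0 of a possible 36
```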

>Doing a chi-square may necessitate multiple goodness-of-fit
>analyses:  the percentage of students saying "1" compared to the percentage
>of non-students saying "1", and so on up to seven--do you protect your type 1
>error then?

Well, the classic, basic, chi-square doesn't; it tests against the
single null hypothesis that the two categorical variables are
statistically independent.

But if the test finds non-independence, there's the question of which cells
are most affected. For that, using the SPSS CROSSTABS procedure (and
borrowing from an earlier thread),

>Try adding, to subcommand CELLS, EXPECTED and ASRESID.
>
>EXPECTED is the expected count in the cell, if the null hypothesis is
>correct, and ASRESID is the adjusted standardized residual, the difference
>between observed and expected in standardized form. Look for cells with
>ASRESID greater than 2 in absolute value; those are the cells where the
>difference is most important.
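
A sketch of what CROSSTABS computes there, assuming the standard formula for
the adjusted standardized residual; the 2x3 table is hypothetical:

```python
def adjusted_residuals(table):
    """(O - E) / sqrt(E * (1 - rowtotal/N) * (1 - coltotal/N)) per cell."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    res = []
    for i, row in enumerate(table):
        res.append([])
        for j, o in enumerate(row):
            e = rows[i] * cols[j] / n
            se = (e * (1 - rows[i] / n) * (1 - cols[j] / n)) ** 0.5
            res[i].append((o - e) / se)
    return res

# hypothetical students (row 0) vs. non-students (row 1), answers in 3 bins
res = adjusted_residuals([[10, 20, 30],
                          [30, 20, 10]])
print([round(v, 2) for v in res[0]])  # [-3.87, 0.0, 3.87]
```

The first and last cells exceed 2 in absolute value, so they drive the
non-independence.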

Re: Sample Means

Samir Omerovic
I just wanted to thank Richard, Jan, John, Arthur, Rick, Stephen and all others for comments and
suggestions.  For the record, I decided to go with the independent t-test :)

Merry Christmas and Happy New Year to all

Thanks once more

Regards from Bosnia

Samir


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Richard Ristow
Sent: Thursday, December 21, 2006 11:48 PM
To: [hidden email]
Subject: Re: Sample Means


Re: Sample Means

Arthur Kramer
In reply to this post by Richard Ristow
Confidence intervals, as the name denotes, are ranges, and are used to
estimate the parameters from the statistics; they get wider or narrower
depending on how much confidence one wants in estimating the "true"
difference.  In Samir's case it will be the difference between the two
sample means (because he says he's going to use an independent-groups t-test).
I hope he provides and interprets the 95% C.I. and eta or d if (when!?) his
research obtains significance.

Arthur Kramer

-----Original Message-----
From: Richard Ristow [mailto:[hidden email]]
Sent: Thursday, December 21, 2006 5:48 PM
To: Arthur Kramer; 'Samir Omerovi'; [hidden email]
Cc: 'Statisticsdoc'; 'Spousta Jan'; 'Rick Bello'
Subject: RE: Sample Means


Re: Sample Means

JOHN ANTONAKIS
In reply to this post by Samir Omerovic
That is a safe choice to make, based on the clarifications you provided.

Regards,
John.

At 08:24 22.12.2006 +0100, Samir Omerović wrote:

>I just wanted to thank Richard, Jan, John, Arthur, Rick, Stephen and all
>others for comments and
>suggestions.  For the record, I decided to go on with Independent T-test:)

___________________________________

Prof. John Antonakis
Faculty of Management and Economics
University of Lausanne
Internef #527
CH-1015 Lausanne-Dorigny
Switzerland

Tel: ++41 (0)21 692-3438
Fax: ++41 (0)21 692-3305

http://www.hec.unil.ch/jantonakis
___________________________________