Brief Conceptual Question

Brief Conceptual Question

Carrie Margulies
Dear Colleagues,

                         I am a Master's candidate in Psychology, and
I'm currently studying for an exam about statistics and research
methodology. There was just one conceptual question that I could not
seem to find an adequate answer to. The question is as follows:

Explain the relationship between sample size and statistical
significance testing. Why is the size of an F ratio bigger with a
larger sample size? What does this imply?

                     I very much appreciate any feedback anyone can
offer! Thank you in advance for your responses. Have a wonderful
weekend.

Best,
Carrie Margulies

Re: Brief Conceptual Question

Hector Maletta
Carrie,
First of all, I'd advise you to read a good book on statistics and research
methodology. That is, all told, the best preparation for an exam on statistics
and research methodology.
That said, this may be useful:
1. The larger the sample, the greater the statistical significance of a
statistical result. This means that the larger the sample, the lower the
chance that the result is just a fluke or chance occurrence.
2. In fact an F ratio is NOT larger for larger sample sizes. The F ratio is
the ratio of explained variance to residual variance, and it does not depend
on sample size. It depends on the proportion of variance in the dependent
variable that is explained by independent or predictor variables. If F is
above a certain minimum value, you can bet (with a certain degree of
confidence) that the proportion of variance explained by your model is not
zero. In any case, for any given degree of confidence, this minimum F ratio
you need for a result to be significant is LOWER with larger samples (a quick
numeric sketch follows below). So either the second part of your exam question
is a malicious trick by your teacher, or you have transcribed it wrong.
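
For instance, the minimum (critical) F can be read straight off the F
distribution. A minimal sketch, assuming a one-way ANOVA with three groups and
a 95% confidence level (Python with scipy; the group count and sample sizes
are purely illustrative):

from scipy.stats import f

k = 3                          # assumed number of groups (illustrative only)
for n in (15, 30, 100, 1000):  # total sample sizes
    df1, df2 = k - 1, n - k
    f_crit = f.ppf(0.95, df1, df2)   # minimum F for significance at alpha = .05
    print(f"n = {n:5d}: critical F({df1}, {df2}) = {f_crit:.2f}")

The critical F falls from about 3.9 at n = 15 towards 3.0 for very large
samples, which is the sense in which a larger sample makes significance easier
to reach for the same proportion of explained variance.
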
Hector


Re: Brief Conceptual Question

Richard Ristow
To be a little picky -

At 11:26 PM 6/23/2006, Hector Maletta wrote:

See phrase in brackets and caps:

>1. The larger the sample, the greater the statistical significance of
>a statistical result <OF THE SAME OBSERVED MAGNITUDE, WITH THE SAME
>UNEXPLAINED VARIANCE IN THE DATA>. This means that the larger the
>sample, the lower the chance that the result is just a fluke or chance
>occurrence.

>2. If F is above a certain minimum value, you can bet (with a certain
>degree of confidence) that the proportion of variance explained by
>your model is not zero.

Alas, not so; confidence levels tell you something different, and much
less satisfying. What Hector is describing is called the *a posteriori*
probability that you have a false positive result THIS TIME. The
significance level is the *a priori* probability of getting a result
this strong, in the absence of any true underlying effect.
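
In other words, the 5% level describes how often results this strong turn up
across repeated experiments when there is no true effect at all. A small
simulation sketch (Python with scipy; purely hypothetical null data) may make
that reading concrete:

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n_sims, hits = 2000, 0
for _ in range(n_sims):
    # three samples drawn from the SAME population, i.e. no true effect
    g1, g2, g3 = (rng.normal(0, 1, 20) for _ in range(3))
    if f_oneway(g1, g2, g3).pvalue < 0.05:
        hits += 1
print(hits / n_sims)   # close to 0.05: the a priori rate of "significant" flukes

It says nothing, by itself, about the probability that any one particular
significant result is a fluke.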

Re: Brief Conceptual Question

Hector Maletta
Picky indeed, Richard. In my first point, of course I am referring to the
same result, only obtained from two samples of different size. In my second
point, the difference is immaterial to the question asked. In both a priori
and a posteriori interpretations the idea is the same for the purpose of the
question. I tried to keep my answer as simple as possible for the benefit of
our colleague asking the question, who may be confused by too many niceties.
Hector


Re: Brief Conceptual Question

Oliver, Richard
In reply to this post by Carrie Margulies
Okay, I'm not a statistician, so feel free to correct my misconceptions.

In all the undergraduate and graduate statistics courses I took, we were taught that the results are either significant or not. There was no such thing as "more" or "less" or "sort of" or "almost" significant. You pick a significance level and then see what you get. If the p value is at or below that level, then you reject the null hypothesis.

That's arguably excessively anal and perhaps not very practical in the real world (but we are talking about an answer to a question in a graduate stats class), and the same result with samples of differing sizes will yield a lower p value for larger samples; so in that sense you could say the result is "more" significant. But perhaps equally important is that larger samples may yield a "significant" p value (p value less than some arbitrarily selected value), in instances where smaller samples fail to yield a significant p value -- and with sufficiently large samples, almost any difference is statistically significant. But that doesn't necessarily make it meaningful.
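
To put rough numbers on that (a hypothetical sketch in Python with scipy; the
means, SDs and sample sizes are made up), here is the same observed result
evaluated at different sample sizes:

from scipy.stats import ttest_ind_from_stats

# Same observed result in every case: means 100 vs 101, SD 10 in both groups.
for n in (20, 100, 500, 5000):
    p = ttest_ind_from_stats(100, 10, n, 101, 10, n).pvalue
    print(f"n per group = {n:5d}: p = {p:.3g}")

The p value drops from roughly 0.75 at n = 20 per group to far below 0.001 at
n = 5000, even though the standardized difference (about 0.1 SD) never changes.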


Re: Brief Conceptual Question

Hector Maletta
Statistical significance, as Richard wisely reminds us, is not equivalent to
meaningfulness, or substantive significance. Statistical significance just
means that the observed sample difference between something and something
else is unlikely to be a mere fluke, i.e. a difference that appears in the
sample even though it doesn't exist in the whole population. Given the sample
size, and having chosen a probability threshold (e.g. p=0.05), statistical
tests tell you whether the probability of getting the observed result by
chance is lower or higher than your p. Of course, a larger sample means you
may decide that a smaller difference is still statistically significant, i.e.
that it probably exists also at the population level and not just in your
sample. This doesn't make the difference substantively interesting or
meaningful.
Hector



Re: Brief Conceptual Question

Marta García-Granero
Hi everybody involved in this thread...

I'd like to add some comments

HM> Statistical significance, as Richard wisely reminds us, is not equivalent to
HM> meaningfulness, or substantive significance. Statistical significance just
HM> means that the observed sample difference between something and something
HM> else is unlikely to be a mere fluke, i.e. a difference that appears in the
HM> sample even though it doesn't exist in the whole population. Given the sample
HM> size, and having chosen a probability threshold (e.g. p=0.05), statistical
HM> tests tell you whether the probability of getting the observed result by
HM> chance is lower or higher than your p. Of course, a larger sample means you
HM> may decide that a smaller difference is still statistically significant, i.e.
HM> that it probably exists also at the population level and not just in your
HM> sample. This doesn't make the difference substantively interesting or
HM> meaningful.

I want to emphasize your point even more: there is a very important
difference between "statistical significance" and "clinical/
biological/ practical..." RELEVANCE. A result can be statistically
significant, but irrelevant. A priori hypotheses and
"extra-statistical" information are important for deciding whether a
result is relevant or not. Sometimes, standardised measures of effect
size, like Cohen's d (for means), eta-squared/f (for ANOVA models), or
thresholds for r, can be useful to help us in the decision. Common
sense can help, too, of course ;)
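
As a rough illustration of how an effect-size measure can temper a tiny p
value, here is a hypothetical sketch (Python with numpy/scipy; the data are
simulated with a deliberately trivial true difference of 0.05 SD):

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
a = rng.normal(0.00, 1, 20000)   # group A
b = rng.normal(0.05, 1, 20000)   # group B: trivially small true difference

t, p = ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd
eta_sq = t**2 / (t**2 + len(a) + len(b) - 2)   # eta-squared from t and its df

print(f"p = {p:.2g}, Cohen's d = {cohens_d:.3f}, eta-squared = {eta_sq:.4f}")

With samples this large the p value is almost certainly far below 0.05, yet
Cohen's d stays around 0.05 and eta-squared around 0.0006: statistically
significant, practically negligible.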

Also, the opposite can happen: relevant results can be statistically
non-significant due to sample size (but I'm NOT going to argue in
favour of post-hoc power analysis, ever).

RO> Okay, I'm not a statistician, so feel free to correct my misconceptions.

RO> In all the undergraduate and graduate statistics courses I took, we were
RO> taught that the results are either significant or not. There was no such
RO> thing as "more" or "less" or "sort of" or "almost" significant. You pick a
RO> significance level and then see what you get. If the p value is at or below
RO> that level, then you reject the null hypothesis.

If Ronald Fisher were here, he would be weeping. He gave the p=0.05
threshold merely as a simple example (while also arguing against the
use of rigid thresholds), and everybody (me included, sigh!) uses it
as the frontier between success (statistical significance) and failure
(non-significance). Things are not black or white in statistics; they
cover all the shades of grey, and "tendency towards significance" or
"almost significant results" (indicating in both cases that the p
value is above 0.05 but below 0.10) are terms used in statistical
language, Oliver.

Regards

Marta