ANOVAs, Bonferroni and SPSS


ANOVAs, Bonferroni and SPSS

kath
I have survey data from gifted pupils in 19 schools concerning their motivation and approaches to learning; these two constructs were divided into 10 subscales each. I wanted to know if the school mean scores were significantly different from each other, so I conducted 20 one-way ANOVAs (in SPSS), one for each subscale, although the number of students per school was between 5 and 17. Can I justify doing 20 different ANOVAs? And can I justify doing them with such small numbers of students in each school? I am an ed. psych, and this seems to be common practice in the literature, so if anyone knows of any relevant articles that illustrate this I would be happy to hear about them!
When the ANOVAs revealed that there were significant differences between schools on the subscales, I conducted post hoc pairwise multiple comparison tests using the Bonferroni adjustment. I chose a 0.05 significance level. Does this mean that SPSS automatically adjusted downwards for the comparisons (divided by 18), so that any sig. levels in the output lower than 0.05 were significant?
Do we only do post hoc tests when the ANOVA has established that differences exist?
I would love to hear your thoughts on this one. Kath

Re: ANOVAs, Bonferroni and SPSS

Bruce Weaver
Administrator
If you are in educational psychology, you must have access to some good textbooks that discuss the problem of multiple testing, and the distinction between per contrast and family-wise (or experiment-wise) alpha levels.  When you carry out 20 tests with alpha = .05 per test, the family-wise alpha is much higher than .05.  (The exact value depends on the degree to which the tests are independent of each other.)  Jerry Dallal's "Little Handbook of Statistical Practice" has some nice illustrations of the problems that arise when one does not correct for multiple tests.  Look for "A Valuable Lesson" and the bullet points under it.  

    http://www.tufts.edu/~gdallal/LHSP.HTM

For another good illustration, see Craig Bennett's nice poster, which reports results of an fMRI study done on a post-mortem Atlantic salmon.

    http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf

HTH.


--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).

Re: ANOVAs, Bonferroni and SPSS

Mike
----- Original Message -----
From: "Bruce Weaver" <[hidden email]>
To: <[hidden email]>
Sent: Thursday, July 14, 2011 7:33 AM
Subject: Re: ANOVAs, Bonferroni and SPSS


> If you are in educational psychology, you must have access to some good
> textbooks that discuss the problem of multiple testing, and the distinction
> between per contrast and family-wise (or experiment-wise) alpha levels.
> When you carry out 20 tests with alpha = .05 per test, the family-wise alpha
> is much higher than .05.  (The exact value depends on the degree to which
> the tests are independent of each other.)  Jerry Dallal's "Little Handbook
> of Statistical Practice" has some nice illustrations of the problems that
> arise when one does not correct for multiple tests.  Look for "A Valuable
> Lesson" and the bullet points under it.
>
>    http://www.tufts.edu/~gdallal/LHSP.HTM

For those of you who have Adobe Acrobat (not Reader, the full program),
one can use Acrobat to do a "web capture" of the website and produce a
PDF of the contents.  This is what you would do:

(1) Open Acrobat.  Click on "Advanced" on the top menu bar, then
click on "Web capture" on the drop-down menu.

(2) After clicking on "Web capture", a side menu will appear.  Click
on the top item "Create PDF from....".  A window will open with a
slot for the URL/web address.  I also click on "Stay on the same
path" and "Stay on same server" (to make sure Acrobat doesn't go
to other websites).  This will create a PDF of the homepage.

(3)  To get the rest of the pages, repeat "Advanced" -> "Web Capture",
but now on the side menu click on "Append all links on page".  Since
the home page is a "Table of Contents" composed of links to the
relevant webpages, this will add all of the webpages for the "book".
There may be a shorter way of doing this, but I have used this method
to create a PDF of the "Little Handbook".

> For another good illustration, see Craig Bennett's nice poster, which
> reports results of an fMRI study done on a post-mortem Atlantic salmon.
>
>    http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf

MMmmmmm, salmon.  I use this as a cautionary note in my cognitive
course when reviewing neuroimaging studies.

To get back to the original issue: to calculate the overall Type I
error rate under the assumption that the tests are independent, one
would use this formula:

Overall alpha = 1 - (1 - per-comparison alpha)**K

where the per-comparison alpha is typically set to .05 and K = the
number of tests (here, the 20 ANOVAs).

In the case above,
Overall alpha = 1 - (1 - .05)**20 = 1 - (.95)**20 = 1 - .3585 = .6415

That is, assuming all 20 null hypotheses are true, there is a 64%
probability that at least one of the ANOVAs yields a Type I error,
but there is no way to know which result(s) are errors.
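For anyone who wants to check these numbers, the family-wise rate is easy to compute; here is a small Python sketch of the same formula (the function name is mine, not from any SPSS output):

```python
# Family-wise Type I error rate for K independent tests, each run
# at the same per-comparison alpha (assuming all nulls are true).
def familywise_alpha(per_comparison_alpha, k):
    return 1 - (1 - per_comparison_alpha) ** k

# 20 one-way ANOVAs at alpha = .05, as in the original question:
print(round(familywise_alpha(0.05, 20), 4))  # 0.6415
```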

One form of Bonferroni correction is to set the overall Type I error
rate to some acceptable value, say .05, and then divide by K -- in
essence, we are solving for the per-comparison alpha. So,

Per comparison alpha = Overall alpha/K = .05/20 = .0025

That is, an ANOVA result is considered statistically significant if the
F ratio has a probability of less than .0025.
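A quick Python sketch of that arithmetic, which also shows that for independent tests the corrected threshold actually achieves a family-wise rate slightly below the nominal .05:

```python
# Bonferroni: divide the desired overall alpha by the number of
# tests to get the per-comparison significance threshold.
overall_alpha = 0.05
k = 20  # the 20 ANOVAs from the original question
per_comparison_alpha = overall_alpha / k
print(round(per_comparison_alpha, 4))  # 0.0025

# For independent tests the achieved family-wise rate is slightly
# conservative (below the nominal overall alpha):
achieved = 1 - (1 - per_comparison_alpha) ** k
print(round(achieved, 4))  # 0.0488
```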

Some might consider this too stringent/conservative and would use
a higher value for the overall alpha, and/or divide the alpha across
comparisons unequally, that is, assign a higher alpha to some critical
Fs and a lower alpha to others (one might want to use a higher alpha
for F-values based on small samples and lower alphas for those based
on larger samples).
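A minimal sketch of that unequal-split idea: the Bonferroni inequality only requires that the per-test alphas sum to the desired overall alpha, so they need not be equal. The particular values below (.006 for five small-sample tests, the rest sharing .02) are purely illustrative, not a recommendation:

```python
# Unequal Bonferroni split (illustrative numbers only): the per-test
# alphas just need to sum to the desired overall alpha of .05.
lenient = [0.006] * 5        # 5 critical, small-sample tests
strict = [0.02 / 15] * 15    # remaining 15 tests share .02
alphas = lenient + strict
print(round(sum(alphas), 4))  # 0.05
```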

In summary, there are a number of issues to contend with in this
type of situation.  One can do what everyone else does, but this
might produce misleading or incorrect conclusions.  Finding out what
the right procedure is may be much harder than following the easy
one.

-Mike Palij
New York University
[hidden email]

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD