My gut says no, but still the question: is there a way to run Fisher's Exact test on a multiple response variable?

Thanks,
WMB (Will)
Statistical Services
============
info.statman@earthlink.net
http://home.earthlink.net/~z_statman/
============
Hi
If the contingency table has 2 levels in one of the dimensions (a 2xK table), you can get an exact p-value (Fisher's or mid-P) using the freeware application Exact2xK.exe from the package PEPI 4.0. You will have to type the cell frequencies into the program manually. Since the Sagebrush Press (the owners of PEPI) page closed, getting PEPI 4.0x is not very easy. If you are interested in the package and can't find it, send me an off-list message and I can email the zipped package to you.

HTH,
Marta GG

--
For miscellaneous SPSS related statistical stuff, visit: http://gjyp.nl/marta/
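Another route, if the separately licensed SPSS Exact Tests add-on module is installed, is to ask CROSSTABS itself for exact p-values. A minimal sketch, entering the table as weighted cases; the variable names (group, resp, freq) and the frequencies are made up:

* Hypothetical 2x3 table entered as cell frequencies.
DATA LIST FREE / group resp freq.
BEGIN DATA
1 1 5
1 2 2
1 3 1
2 1 1
2 2 4
2 3 6
END DATA.
WEIGHT BY freq.
* /METHOD=EXACT needs the Exact Tests add-on; TIMER limits the run time.
CROSSTABS
  /TABLES=group BY resp
  /STATISTICS=CHISQ
  /METHOD=EXACT TIMER(5).

Without the add-on, CROSSTABS still prints the asymptotic chi-squares, and for a 2x2 table it adds Fisher's exact test automatically.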
In reply to this post by zstatman
Just in case folks are not aware of it, here is a paper that argues quite persuasively that we ought to be using the 'N-1' chi-square rather than Fisher's exact test:

Campbell, I. Chi-squared and Fisher-Irwin tests of two-by-two tables with small sample recommendations. Statist. Med. 2007; 26:3661-3675. See also www.iancampbell.co.uk/twobytwo/background.htm.

Here are a couple of ways to compute it in SPSS:

http://sites.google.com/a/lakeheadu.ca/bweaver/Home/statistics/files/N_minus_1_chisquare.txt?attredirects=0
http://sites.google.com/a/lakeheadu.ca/bweaver/Home/statistics/files/N_minus_1_chisquare_v2.txt?attredirects=0
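For a 2x2 table, the 'N-1' statistic is just the Pearson chi-square with N replaced by N-1, i.e. (N-1)(ad-bc)^2 / (row1 x row2 x col1 x col2), which is the same as multiplying the Pearson value by (N-1)/N. A minimal hand computation with made-up cell counts (a sketch only, not the contents of the linked syntax files):

* Hypothetical 2x2 table with cells a=10, b=3 (row 1) and c=4, d=9 (row 2).
DATA LIST FREE / a b c d.
BEGIN DATA
10 3 4 9
END DATA.
COMPUTE n    = a + b + c + d.
* 'N-1' chi-square and its p-value on 1 df.
COMPUTE x2n1 = (n - 1)*((a*d - b*c)**2) / ((a + b)*(c + d)*(a + c)*(b + d)).
COMPUTE p_n1 = 1 - CDF.CHISQ(x2n1, 1).
EXECUTE.
LIST.

Equivalently, take the Pearson chi-square that CROSSTABS reports, multiply it by (N-1)/N, and refer the result to a chi-square distribution on 1 df.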
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/
"When all else fails, RTFM."
Bruce Weaver referenced a useful paper (22-10-09, 2.11 pm), saying:

> Just in case folks are not aware of it, here is a paper that argues quite
> persuasively that we ought to be using the 'N-1' chi-square rather than
> Fisher's exact test.
>
> Campbell, I. Chi-squared and Fisher-Irwin tests of two-by-two tables with
> small sample recommendations. Statist. Med. 2007; 26:3661-3675. See also
> www.iancampbell.co.uk/twobytwo/background.htm.

Reading that paper, two extracts are relevant.

From the summary:

"The K. Pearson and 'N - 1' chi squared tests cannot be used at low sample sizes without restriction because they have Type I error rates considerably above the nominal, across certain ranges of the unknown population proportion(s). In these circumstances (and also, some would say, for theoretical reasons detailed in the discussion sections), the Yates chi squared test and the Fisher-Irwin test are generally substituted, resulting in low Type I error rates, and inevitable loss of power. Restrictions on when the K. Pearson chi squared test can be used date back over 50 years to Cochran and earlier. Cochran noted that the restrictions were arbitrary and provisional, giving rise to a clear need for further research on the performance of variants of the chi squared test under various restrictions."

Under "Comparison of Tests by Type I Error: Findings":

"Hirji et al. (1991) were the first to study mid-P versions of the Fisher-Irwin test in comparative trials. They found that the mid-P method doubling the one-sided probability performs better than standard versions of the Fisher-Irwin test, and the mid-P method by Irwin's rule performs better still in having actual Type I levels closest to nominal levels, although the median level over the range of cases studied was still below nominal levels. Because of the absence of high actual levels, these authors recommended use of the mid-P method by Irwin's rule in preference to the 'N - 1' chi squared test. Hwang and Yang (2001) also judged the mid-P method by Irwin's rule to perform quite reasonably."

So, while I agree with Bruce in general, these two extracts show that there are times when the Fisher-Irwin test is preferable to the N-1 chi-squared test, and that the mid-P method for Fisher-Irwin might perform better than the standard method or the N-1 chi-squared method.

Overall, the paper illustrates the minefield of possibilities dating back over 50 years.

Best Wishes,

Martin Holt
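To make "the mid-P method doubling the one-sided probability" concrete: the one-sided mid-P includes only half of the probability of the observed table itself, and the result is then doubled. A rough sketch for a single made-up 2x2 table, using SPSS's hypergeometric functions (this shows only the doubling method, not the mid-P by Irwin's rule; the cell counts and variable names are hypothetical):

* Hypothetical 2x2 table with cells 1, 9 (row 1) and 7, 3 (row 2), so
* N=20, row 1 total n1=10, column 1 total c1=8. The expected count for
* cell a is 10*8/20 = 4, so the observed a=1 lies in the lower tail.
DATA LIST FREE / a n1 c1 n.
BEGIN DATA
1 10 8 20
END DATA.
* One-sided (lower-tail) mid-P = P(X <= a) - 0.5*P(X = a).
COMPUTE mid1 = CDF.HYPER(a, n, n1, c1) - 0.5*PDF.HYPER(a, n, n1, c1).
* Doubled one-sided mid-P, capped at 1.
COMPUTE mid2 = MIN(1, 2*mid1).
EXECUTE.
LIST.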
I apologize for coming back to this so belatedly, but I've just been looking at Campbell's paper again. Martin is quite right in pointing out that there are limitations on when the N-1 chi-square can be used--specifically, all expected counts must be equal to or greater than 1. Here is the advice Campbell gives to conclude the article:

--- start of excerpt ---

The data and arguments presented here provide a compelling body of evidence that the best policy in the analysis of 2x2 tables from either comparative trials or cross-sectional studies is:

(1) Where all expected numbers are at least 1, analyse by the 'N - 1' chi-squared test (the K. Pearson chi-squared test but with N replaced by N - 1).

(2) Otherwise, analyse by the Fisher-Irwin test, with two-sided tests carried out by Irwin's rule (taking tables from either tail as likely, or less, as that observed).

This policy extends the use of the chi-squared test to smaller samples (where the current practice is to use the Fisher-Irwin test), with a resultant increase in the power to detect real differences.

--- end of excerpt ---

Notice that Campbell recommends the two-sided Fisher-Irwin test by Irwin's rule, not one of the mid-P versions. AFAIK, this is the two-sided Fisher exact test that SPSS calculates.

QUESTION FOR SPSS: Can you add the N-1 chi-square to the CROSSTABS output for 2x2 tables?

Cheers,
Bruce
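In practice, choosing between (1) and (2) turns on the smallest expected count, which for a 2x2 table is (smallest row total) x (smallest column total) / N. A minimal sketch of that check with made-up counts (variable names are hypothetical):

* Hypothetical 2x2 table with cells a=2, b=0 (row 1) and c=1, d=5 (row 2).
DATA LIST FREE / a b c d.
BEGIN DATA
2 0 1 5
END DATA.
COMPUTE n      = a + b + c + d.
* Smallest expected count = smallest row total * smallest column total / n.
COMPUTE e_min  = MIN(a + b, c + d) * MIN(a + c, b + d) / n.
* 1 -> the 'N-1' chi-square is admissible (step 1);
* 0 -> use the two-sided Fisher-Irwin test, i.e. the Exact Sig. (2-sided)
*      that CROSSTABS reports for a 2x2 table (step 2).
COMPUTE use_n1 = (e_min GE 1).
EXECUTE.
LIST.

For these counts e_min is 0.75, so the sketch points to the Fisher-Irwin test.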
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/
"When all else fails, RTFM."
You're right, Bruce...thanks for putting over a clear summary, when the paper does honestly illustrate the variety of chi-squared and exact tests and their performances!

Best Regards,
Martin
In reply to this post by Bruce Weaver
Just curious.

Diana

On 17/12/2009 20:04, "Bruce Weaver" <bruce.weaver@...> wrote:
> www.iancampbell.co.uk/twobytwo/background.htm

Professor Diana Kornbrot
School of Psychology, University of Hertfordshire
email: d.e.kornbrot@...
web: http://web.me.com/kornbrot/KornbrotHome.html
Hi Diana,

As well as talking about how this discussion has lasted over one hundred years, there are quotes such as:

"Other versions of the chi squared and Fisher-Irwin tests have been proposed, and there are also a considerable number of alternative tests - see Upton (1982), who discussed a total of 23 different tests..."

Is the log likelihood chi-squared particularly outstanding, such that it should have been studied along with the three methods used? (The paper is 2007, so not out of date... unless things have moved on.)

Best Wishes,

Martin Holt
In reply to this post by Kornbrot, Diana
Hi Diana. Agresti at least has argued that the likelihood-ratio chi-square doesn't perform as well as Pearson's chi-square when expected counts get low. The following is from his book, Categorical Data Analysis.

--- start of excerpt ---

It is not simple to describe the sample size needed for the chi-squared distribution to approximate well the exact distributions of X^2 and G^2 [also called L^2 by some authors]. For a fixed number of cells, X^2 usually converges more quickly than G^2. The chi-squared approximation is usually poor for G^2 when n/IJ < 5 [where n = the grand total and IJ = rc = the number of cells in the table]. When I or J [i.e., r or c] is large, it can be decent for X^2 for n/IJ as small as 1, if the table does not contain both very small and moderately large expected frequencies. (Agresti, 1990, p. 49)

--- end of excerpt ---

HTH.
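For anyone who wants to see the two statistics side by side: X^2 is the sum over cells of (O - E)^2 / E, and G^2 is 2 times the sum over cells of O x ln(O/E). A minimal sketch for one made-up 2x2 table with no empty cells (CROSSTABS with /STATISTICS=CHISQ reports the same two quantities as "Pearson Chi-Square" and "Likelihood Ratio"):

* Hypothetical 2x2 table with cells a=10, b=3 (row 1) and c=4, d=9 (row 2).
DATA LIST FREE / a b c d.
BEGIN DATA
10 3 4 9
END DATA.
COMPUTE n  = a + b + c + d.
* Expected counts under independence.
COMPUTE ea = (a + b)*(a + c)/n.
COMPUTE eb = (a + b)*(b + d)/n.
COMPUTE ec = (c + d)*(a + c)/n.
COMPUTE ed = (c + d)*(b + d)/n.
* Pearson X^2 and likelihood-ratio G^2 (assumes no zero cells).
COMPUTE x2 = (a - ea)**2/ea + (b - eb)**2/eb + (c - ec)**2/ec + (d - ed)**2/ed.
COMPUTE g2 = 2*(a*LN(a/ea) + b*LN(b/eb) + c*LN(c/ec) + d*LN(d/ed)).
EXECUTE.
LIST.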
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/
"When all else fails, RTFM."
In reply to this post by Martin Holt
Pity not all its recommendations are in PASW cross-tabs! www.iancampbell.co.uk/twobytwo/background.htm

Implications and message for PASW/IBM:
- Give the 'N-1' Pearson and maximum likelihood chi-squares.
- Drop the Yates correction; there appear to be no situations in which it is preferred.
- It's good to note that the Fisher test given by PASW is indeed the Irwin version recommended by Campbell.
- The Pearson chi-square currently uses N, rather than N-1.

Martin Holt asked whether the log likelihood chi-squared is particularly outstanding, such that it should have been studied along with the three methods used.

Why suggest likelihood ratio tests? They are not 'just' one of the 23 screwball suggestions. Campbell already notes that another approach is to put confidence limits round a comparison of the conditional probability of 'success' given category 1, p1, with the conditional probability of 'success' given category 2, p2. Obviously, independence is equivalent to p1 = p2. But how should divergence from equality be measured?

Likelihood ratio approaches test H0: p1/p2 = 1, or equivalently ln(p1/p2) = 0; with this formulation the confidence limits can never fall outside the admissible range. The Pearson chi-square approach tests p1 - p2 = 0, and with p1 or p2 near 0 or 1 one can get confidence limits outside -1, 1.

Furthermore, the ratio approach may make more intuitive sense when comparing magnitudes of deviation from independence, i.e. strength of effects. Thus a difference in probabilities of 0.1, from .55 to .65, is a relative increase of about 17% [ln(.65/.55) = 0.17], while an increase from .85 to .95 is a relative increase of only about 11% [ln(.95/.85) = 0.11]. Conversely, a relative increase of 20% means a shift of .18, from .80 to .98, near the top of the scale, but a shift of only .12, from .54 to .66, around a probability of .60. It's tougher to move from 90% to 95% than from 50% to 55%, as any student or airline operator can tell. I prefer my contingency coefficient measure of strength to reflect this, and hence use the LL chi-square to derive the contingency coefficient.

N-1 or N? Meanwhile, the theoretical reason for using N-1 rather than N, namely that the N-1 equation uses an unbiased estimate of p(1-p), applies equally to the likelihood chi-square.

I tend to follow Agresti, who seems to me admirably clear on these issues [and many others]:

Agresti, A. (2002). Categorical Data Analysis (2nd ed.).
Agresti, A. (2003). Frontmatter. Categorical Data Analysis (2nd ed.) (pp. i-xv). http://dx.doi.org/10.1002/0471249688.fmatter
Agresti, A., & Hartzel, J. (2000). Tutorial in biostatistics: Strategies for comparing treatments on a binary response with multi-centre data. Statistics in Medicine, 19, 1115-1139.

Best,
Diana

Professor Diana Kornbrot
email: d.e.kornbrot@...
web: http://web.me.com/kornbrot/KornbrotHome.html
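As a concrete illustration of the ratio formulation (a minimal sketch with made-up counts: 13/20 versus 11/20 gives p1 = .65 and p2 = .55, matching the example above; the usual large-sample standard error for ln(p1/p2) is assumed):

* Group 1: a successes out of n1; group 2: c successes out of n2.
DATA LIST FREE / a n1 c n2.
BEGIN DATA
13 20 11 20
END DATA.
COMPUTE p1   = a/n1.
COMPUTE p2   = c/n2.
COMPUTE lnrr = LN(p1/p2).
* Large-sample SE of ln(p1/p2).
COMPUTE se   = SQRT(1/a - 1/n1 + 1/c - 1/n2).
* 95% confidence limits for the ratio p1/p2.
COMPUTE lo   = EXP(lnrr - 1.96*se).
COMPUTE hi   = EXP(lnrr + 1.96*se).
EXECUTE.
LIST.

Here ln(p1/p2) is about 0.17, and the 95% limits for p1/p2 come from exponentiating 0.17 +/- 1.96 x SE.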
