Re: p-value / type I error

Posted by drfg2008 on Apr 13, 2011; 4:06pm
URL: http://spssx-discussion.165.s1.nabble.com/p-value-type-I-error-tp4296382p4300978.html

The problem seems to be that hypothesis testing as commonly practiced is a hybrid of two different models: the Fisher concept and the Neyman-Pearson concept.

Fisher followed a completely different approach than Neyman and Pearson. Fisher coined the terms "significance" and "p-value". [1] According to Fisher, a significance test is a procedure for establishing the probability of an outcome (!), as well as more extreme ones, under a null hypothesis of no effect or relationship. Neyman and Pearson, on the other hand, introduced the alpha and beta concept: error rates fixed in advance of the experiment. Raymond Hubbard and M.J. Bayarri call it "The distinction between evidence (p's) and error (α's)".
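To make the contrast concrete, here is a small sketch (my own illustration, with made-up numbers, not from the paper): Fisher would report the p-value itself as a graded measure of evidence, while Neyman-Pearson fixes alpha in advance and reports only the accept/reject decision.

```python
import math

def one_sided_z_pvalue(z):
    # P(Z >= z) for a standard normal, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical data: n = 100, sample mean 0.30, known sd 1, H0: mu = 0
n, xbar, sd = 100, 0.30, 1.0
z = xbar / (sd / math.sqrt(n))  # z = 3.0

# Fisher: the p-value itself is the reported quantity (evidence)
p = one_sided_z_pvalue(z)

# Neyman-Pearson: alpha is fixed before the data are seen; only the
# decision is reported, not the p-value
alpha = 0.05
decision = "reject H0" if p < alpha else "fail to reject H0"

print(f"p = {p:.5f}; decision at alpha = {alpha}: {decision}")
```

In the Neyman-Pearson reading, p = 0.001 and p = 0.049 lead to exactly the same report ("reject at alpha = 0.05"), which is precisely where the two concepts pull apart.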

My question is: how do these concepts fit together, if at all? Hubbard and Bayarri suggest: "This is achieved by reporting conditional (on p-values) error probabilities." (Section 3.2, Reconciling Fisher's and Neyman–Pearson's Methods of Statistical Testing)

Does anyone have an idea what is meant by that (and can explain it in a way that everyone can understand)?


Frank

[1] http://ftp.isds.duke.edu/WorkingPapers/03-26.pdf

(I thank Generalist for the link)
Dr. Frank Gaeth