Re: p = .000? [SEC=UNCLASSIFIED]

Re: p = .000? [SEC=UNCLASSIFIED]

Gosse, Michelle

There is, of course, the obvious issue that p-values are strongly affected by sample size: with a very large sample, say tens of thousands of cases, minuscule p-values occur frequently. So I follow Matthew in asking "does this show a practically significant impact?", e.g. a sufficient ROI or a sufficient clinical effect. Significant p-values can be next to useless as an indicator with large sample sizes.
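
A minimal sketch of this point in Python with scipy (an illustration added here, not from the thread; the effect size, sample size, and seed are all made up): a true mean difference of 0.05 SD is practically negligible, yet with 50,000 cases per group it comes out "significant" by a wide margin.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 50_000                        # very large sample per group

a = rng.normal(0.00, 1.0, n)      # control group
b = rng.normal(0.05, 1.0, n)      # true difference of only 0.05 SD

t, p = stats.ttest_ind(a, b)
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(f"t = {t:.2f}, p = {p:.4g}, d = {d:.3f}")
# p lands far below .001 while Cohen's d stays near 0.05:
# statistically significant, practically meaningless.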

Cheers

Michelle

UNCLASSIFIED

From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Poes, Matthew Joseph
Sent: Thursday, January 12, 2012 3:38 AM
To: [hidden email]
Subject: Re: p = .000?

OK, first: I believe there is nothing you can really do but report it as you get it, or go against the APA recommendation and report it as p < .001.
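
Note that a displayed ".000" just means the value rounds to zero at three decimals, i.e. p < .0005. The reporting rule is easy to automate; here is a minimal sketch in Python (an illustration; the helper name format_p is made up, not an APA or SPSS facility):

def format_p(p: float) -> str:
    """Format a p-value for an APA-style results section."""
    if p < 0.001:
        return "p < .001"                     # below the smallest exactly reportable value
    return f"p = {p:.3f}".replace("0.", ".")  # APA style drops the leading zero

print(format_p(0.0000341))   # -> p < .001
print(format_p(0.0312))      # -> p = .031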

Second, let me say that I think the APA's decision to require manuscripts to report exact p-values was a very dumb one. We should be moving away from attention to the exact p-value and toward assessing the significance of findings in terms of relative effect size. In my opinion, requiring exact p-values is just going to perpetuate the misinterpretation of results as being "highly significant." Their decision is completely at odds with the decisions made by other groups, such as the What Works Clearinghouse in education.

Also note that while you can sometimes double-click a value in the output to see its exact value, this doesn't appear to work for p-values: in version 19 I just see a "0".
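
If the exact value is needed, it can be recovered outside SPSS from the test statistic and degrees of freedom printed in the same table. A sketch in Python with scipy (an illustration, not an SPSS feature; the t and df values are hypothetical numbers read off a t-test table):

from scipy import stats

t, df = 5.21, 148                 # hypothetical values from an SPSS t-test table
p = 2 * stats.t.sf(abs(t), df)    # two-tailed p from the t distribution
print(f"exact p = {p:.2e}")       # a value this small is what SPSS displays as .000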

Matthew J Poes
Research Data Specialist
Center for Prevention Research and Development
University of Illinois

From: SPSSX(r) Discussion [hidden email] On Behalf Of SR Millis
Sent: Wednesday, January 11, 2012 8:26 AM
To: [hidden email]
Subject: p = .000?

When SPSS output reports a p-value as .000, how should you report the value?

SR Millis

Re: p = .000? [SEC=UNCLASSIFIED]

Rich Ulrich
Here's some defense of the APA recommendation for exact p-values.

Michelle - Very few APA articles have samples of tens of thousands, so what you say is largely irrelevant to them, even if that were a fixed rule, which I doubt. (On the other hand, large surveys too often escape criticism for using the simple within-group variance as the error term. Proper error terms would produce larger and more sensible p's. But that is another issue.)

Matthew -
1. Providing p's provides redundancy, so that the reader can confirm the report by checking the consistency of N, effect size, and p. Hopefully, real errors are caught by the reviewers instead of reaching the audience.

2. In the absence of any report of effect size, or when the reports are not clear and simple, the p-value allows the reader to compute the effect size despite the oversight.
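
For instance (a sketch in Python with scipy, added as an illustration; the reported p and group sizes are hypothetical), a reader given only a two-tailed p from a two-sample t-test can invert it back to |t| and convert that to an approximate Cohen's d:

from math import sqrt
from scipy import stats

def d_from_p(p: float, n1: int, n2: int) -> float:
    """Approximate Cohen's d recovered from a reported two-tailed p-value."""
    df = n1 + n2 - 2
    t = stats.t.isf(p / 2, df)           # invert the two-tailed p back to |t|
    return t * sqrt(1 / n1 + 1 / n2)     # standard t-to-d conversion

print(round(d_from_p(0.002, 40, 40), 2))   # e.g. p = .002 with n = 40 + 40 -> d of about 0.71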

3. One of the continuing problems of data presentation is multiplicity - multiple variables, multiple hypotheses, multiple tests. "Correction" for multiplicity is not always done -- or not done as the reader would prefer. Exact p-values give the reader a better chance of reconstructing the inferences, e.g. by applying a correction of their own, as in the sketch below.
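
A small sketch in Python (added as an illustration; Holm's step-down procedure is one standard choice, and the example p-values are made up):

def holm(pvals: list[float], alpha: float = 0.05) -> list[bool]:
    """Return, per hypothesis, whether it survives Holm's step-down correction."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    reject = [False] * len(pvals)
    for rank, i in enumerate(order):
        if pvals[i] > alpha / (len(pvals) - rank):
            break                  # step down: stop at the first failure
        reject[i] = True
    return reject

print(holm([0.003, 0.030, 0.040]))   # exact p's reported -> [True, False, False]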

4. The author may be inhibited (by reviewers) from screwing up the relative importance of effects. The p-value contrast of "barely 0.05" vs. "0.002" is not always *recognized* by researchers who have not been at it long and are still in love with "rejected at 5%" as their only rule. But the contrast will stand out for the informed reader. In my consultations, I have (indeed) pointed out, "This one is big. See, p = .001? It has a 95% or so chance of replication (as at least 5%) if the study were repeated. That one is barely 5%, which means it has only a 50% chance of replication. Talk about this one."
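
That rule of thumb can be made concrete under one strong assumption: treat the observed effect as the true effect, and compute the power of an identical replication to reach two-tailed p < .05. A sketch in Python with scipy (a formalization added here, not Rich's code); on this simple normal approximation, p = .05 replicates about half the time, and p = .001 comes out near 91% - in the neighborhood of the "95% or so" cited above:

from scipy import stats

def replication_power(p_obs: float, alpha: float = 0.05) -> float:
    """Power of an identical replication, assuming observed effect = true effect."""
    z_obs = stats.norm.isf(p_obs / 2)    # z implied by the observed two-tailed p
    z_crit = stats.norm.isf(alpha / 2)   # 1.96 for alpha = .05
    return stats.norm.sf(z_crit - z_obs)

print(round(replication_power(0.05), 2))    # -> 0.5
print(round(replication_power(0.001), 2))   # -> 0.91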

--
Rich Ulrich


