Significant difference in "wrong" direction

7 messages

Significant difference in "wrong" direction

Allan Lundy, PhD

Dear Listers,
Like most statisticians, if I am predicting a result in a particular direction (mean of Group A larger than that of Group B, for example), I use a 1-tailed significance level.  For example, if SPSS t-test output reports p = .080 for a 2-tailed test, I simply count that as p = .040.  However, from time to time, and again with a current client, I get an embarrassing result: like the above but in the "wrong" direction, that is, with the Group B mean larger.  What does one do in this case?  Obviously the hypothesis was not supported, but it has always seemed to me that such a result is meaningful -- not only is there no effect as predicted, but there is evidence for an effect in the opposite direction.  I have generally treated this conservatively and reported it as a kind of super-rejection of the hypothesis.  I figure that if you are using a 1-tailed test, it should apply just as much to harm you as to help you.  Of course, in a sense, this violates the presumption behind the concept of the 1-tailed test -- that is, there is not supposed to be any chance of getting a result in the far-left tail of a 1-tailed distribution.  How to handle this must be a profound philosophical question in statistics, but I don't think I have ever seen it addressed in writing.  Does anybody know of anything on this?  Any thoughts of your own?
Thanks!
Allan


Allan Lundy, PhD
Research Consulting
[hidden email]

Business & Cell (any time): 215-820-8100
Home (8am-10pm, 7 days/week): 215-885-5313
Address:  108 Cliff Terrace, Wyncote, PA 19095
Visit my Web site at www.dissertationconsulting.net

=====================
To manage your subscription to SPSSX-L, send a message to [hidden email] (not to SPSSX-L), with no body text except the command. To leave the list, send the command SIGNOFF SPSSX-L. For a list of commands to manage subscriptions, send the command INFO REFCARD.


Re: Significant difference in "wrong" direction

Martin Holt
Hi Allan,
 
A good place to start is http://www.bmj.com/cgi/content/full/309/6949/248, which addresses your question; it comes from the Statistics Notes series by Martin Bland and Doug Altman.
 
bw,
Martin Holt




Re: Significant difference in "wrong" direction

Joost van Ginkel
In reply to this post by Allan Lundy, PhD
Dear Allan,
 
Technically, the one-sided p-value you get represents the probability of finding a mean difference A - B at least this large.  Now if the mean of B is larger than that of A, you get a negative difference, and you want to know the probability of finding this negative difference or larger.  If the results were in the direction you expected and SPSS reported a two-sided p-value of, say, 0.04, the one-sided p-value would be 0.02.  However, when the mean difference goes in the opposite direction, you have to look at the other side of the distribution.  Thus, in that case your one-sided p-value would become 1 - 0.02 = 0.98.  This is probably not new to most statisticians.
About the philosophical part: what you should do depends entirely on the context, I think.  For example, if a medicine cures people more slowly than a placebo although you expected it to cure them faster, that would be a reason to say: well, this medicine does more harm than good, so the null hypothesis should be rejected, but not in the way you would have wanted.  Thus, say in the discussion that if you had done a two-sided test, or a one-sided test in the opposite direction, the result would have been significant, and that this implies the medicine is actually harmful.  On the other hand, suppose you suspect a soda manufacturer of fraud, for example of putting less soda in each bottle, on average, than the label says, and it turns out that there is actually more in each bottle than the label says; then the manufacturer doesn't have to be sued.  So stick to your original one-sided alternative hypothesis that the manufacturer puts less soda in the bottle.  To summarize: I would look at the context.
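This arithmetic is easy to get backwards, so here is a minimal sketch (plain Python, not SPSS syntax; the function name and signature are illustrative, not from any package) of converting the two-sided p that SPSS reports into a one-sided p while respecting the direction of the observed difference:

```python
def one_sided_p(t_stat, p_two_sided, predicted_sign=+1):
    """One-sided p-value derived from a two-sided p-value.

    If the observed difference (sign of the t statistic) matches the
    predicted direction, the one-sided p is half the two-sided p;
    if it points the "wrong" way, it is the complement of that half.
    """
    if (t_stat >= 0) == (predicted_sign >= 0):
        return p_two_sided / 2.0       # result in the predicted direction
    return 1.0 - p_two_sided / 2.0     # result in the "wrong" direction
```

With the example above, a two-sided p of 0.04 gives a one-sided p of 0.02 when the effect is in the predicted direction, and 0.98 when the means are reversed; the function is only bookkeeping for the by-hand calculation.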
 
Best regards,
 
Joost van Ginkel
 

Joost R. van Ginkel, PhD
Leiden University
Faculty of Social and Behavioural Sciences
PO Box 9555
2300 RB Leiden
The Netherlands
Tel: +31-(0)71-527 3620
Fax: +31-(0)71-527 1721

 



**********************************************************************
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager.
**********************************************************************

 


Re: Significant difference in "wrong" direction

mpirritano
In reply to this post by Allan Lundy, PhD

Allan,

 

This has certainly been addressed in the literature: see Kaiser (1960), who is cited by Richard Harris. Google Richard Harris, my former multivariate stats professor at UNM, and a very interesting cat. Here's what you do: directional testing. Set the tail that you really believe is the 'wrong' direction at something small, like p < .01, and the other tail at p < .04. The division is arbitrary, but it must be fixed in advance. That way you don't miss something that IS very interesting if it happens in the direction opposite to the one you hypothesized. Because if you miss it, some would say that you MISSED it. You can't go back and change your assumptions after the fact.
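The lopsided allocation described above can be sketched as a simple decision rule (plain Python; the function name, the default .04/.01 split, and the assumption that the predicted direction corresponds to a positive t statistic are all illustrative, not from any package):

```python
def lopsided_decision(t_stat, p_two_sided,
                      alpha_predicted=0.04, alpha_opposite=0.01):
    """Directional test with unequal tail areas (total alpha = .05).

    The predicted direction is assumed to correspond to a positive
    t statistic; the one-sided p in each tail is half the two-sided p.
    """
    p_one_sided = p_two_sided / 2.0
    if t_stat > 0 and p_one_sided < alpha_predicted:
        return "reject: effect in predicted direction"
    if t_stat < 0 and p_one_sided < alpha_opposite:
        return "reject: effect in OPPOSITE direction"
    return "fail to reject"
```

The point of the rule is that the split (.04/.01 here, versus .025/.025 for an ordinary two-tailed test) is chosen before the data are seen, so a "wrong"-direction result can still be declared significant without changing assumptions after the fact.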

 

Thanks

Matt

 

Matthew Pirritano, Ph.D.

Research Analyst IV

Medical Services Initiative (MSI)

Orange County Health Care Agency

(714) 568-5648



Re: Significant difference in "wrong" direction

Bruce Weaver
Administrator
In reply to this post by Martin Holt
And with regard to the original post, here is the key point from that article:

"In general a one sided test is appropriate when a large difference in one direction would lead to the same action as no difference at all. Expectation of a difference in a particular direction is not adequate justification."  (emphasis added)



--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).

Re: Significant difference in "wrong" direction

Art Kendall
In reply to this post by Joost van Ginkel
To elaborate on this good advice:
The null hypothesis is that one should stay with the {prevailing, status quo, default} decision about a {theory, practice, policy}. In an Anglo-American justice analogy, the null hypothesis, like the defendant, is presumed innocent {true, useful} unless there is sufficient evidence otherwise. In some other justice systems, failure to reach the criterion would mean the charge is "not proven".

The number of tails determines what is sufficient evidence.

Remember, statistical "significance" only tells us that a {difference, relation} is statistically distinguishable from randomness.  It is necessary but not sufficient for a decision to go with the alternative {theory, practice, policy}.

Art Kendall
Social Research Consultants


Re: Significant difference in "wrong" direction

Bruce Weaver
Administrator
In reply to this post by mpirritano
Hi Matt.  I've seen that suggestion before, and on the face of it, it seems like a great way to go.  But one criticism is that you don't have any way of knowing whether the .05 was really carved up the way the author reports PRIOR to the results being seen.  With an ordinary non-directional test (.025 in each tail), this is not an issue.

Cheers,
Bruce