Dear list: I have just completed a logistic regression and have a p value for an OR equal to .05, yet the confidence interval includes 1. Since I made a prediction is it legit to divide the .05 by 2 and thus the p value would become .025. thanks, martin sherman
Hi Martin
> Dear list: I have just completed a logistic regression and have a p
> value for an OR equal to .05, yet the confidence interval includes 1.
> Since I made a prediction is it legit to divide the .05 by 2 and thus
> the p value would become .025. thanks, martin sherman

I have a certain feeling of "déjà vu". I think I replied to a similar question (was it yours?), focused on linear instead of logistic regression, more or less a year ago. I have been trying to locate my old message, but since I have a new computer and a different email program it was impossible.

Basically: it isn't legitimate to halve the p-value, even if you had made a prediction about the direction of the result. Moreover, it isn't legitimate to halve a non-significant p-value as an afterthought to make it significant (you should have made that decision BEFORE collecting your data). What would your conclusions be if the OR had been significant, but the direction of the effect had been the opposite of what you predicted? Would you have discarded it as a non-significant result, or would you have halved that p-value too?

This paragraph comes from a very good paper on the topic (available at http://bmj.com/cgi/content/full/309/6949/248):

"In general a one sided p value is appropriate when a large difference in one direction would lead to the same action as no difference at all. Expectation of a difference in a particular direction is not adequate justification [...]. For example, Galloe et al found that oral magnesium significantly increased the risk of cardiac events, rather than decreasing it as they had hoped. If a new treatment kills a lot of patients we should not simply abandon it; we should ask why this happened. Two sided tests should be used unless there is a very good reason for doing otherwise [...]. One sided tests should never be used simply as a device to make a conventionally non-significant difference significant."

Alternative approaches:

1) You don't mention the study sample size. Did you estimate the sample size needed before collecting your data? (I have no faith in post-hoc power analysis; don't do it now.) Increasing your sample size should solve the problem (provided your OR is reliable).

2) Is your OR univariate, or adjusted for potential confounding factors? You might have overlooked a very important confounding factor in your study, so that the OR is biased or obscured by other factors. Have you considered the possibility of effect modifiers? Your nearly significant OR might be an averaged effect that should be split in two: no effect in one subgroup and a very intense effect in the other. On the other hand, if the OR is adjusted, you might have an overparameterized model, adjusted not only for confounding factors but also for intermediate variables (an intermediate step in the causal chain from exposure to outcome, like cholesterol levels as a link between high fat consumption and cardiovascular disease: you should NOT adjust for cholesterol levels if you want to evaluate the risk of cardiovascular disease associated with high fat consumption).

HTH,
Marta García-Granero
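The point about direction can be checked with a few lines of arithmetic. A minimal sketch, assuming the test statistic is a standard normal z (as in a Wald test for a log odds ratio); the values 1.96 and -1.96 are made-up illustrative inputs:

```python
from math import sqrt, erf

def normal_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 1 - 0.5 * (1 + erf(z / sqrt(2.0)))

def p_values(z):
    """Two-sided p-value, plus the one-sided p-value for the
    pre-specified hypothesis 'effect > 0' (i.e. OR > 1)."""
    two_sided = 2 * normal_sf(abs(z))
    one_sided = normal_sf(z)
    return two_sided, one_sided

# Effect in the predicted direction: one-sided p is half the two-sided p
print(p_values(1.96))    # ~0.05 two-sided, ~0.025 one-sided

# Same-sized effect in the OPPOSITE direction: the one-sided p-value
# is ~0.975, not 0.025 -- halving the two-sided p would be wrong here
print(p_values(-1.96))   # ~0.05 two-sided, ~0.975 one-sided
```

Halving is only the right arithmetic when the observed effect happens to fall on the predicted side, which is exactly why the one-sided decision has to be made before seeing the data.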
Hi again
I wrote:

> I have a certain feeling of "déjà vu". I think I replied to a similar
> question (was it yours?), but focused on linear instead of logistic
> regression, more or less a year ago. I have been trying to locate my
> old message, but since I have a new computer and a different email
> program it was impossible.

I finally found it: http://listserv.uga.edu/cgi-bin/wa?A2=ind0507&L=spssx-l&P=R26656

Another item: I went directly to your question and overlooked the first part of it:

> I have just completed a logistic regression and have a p value for an
> OR equal to .05, yet the confidence interval includes 1.

This is absolutely logical: if the p-value is exactly 0.05, then the 95% CI will have 1 as its lower limit. If your p-value is not exactly 0.05 but slightly higher (like 0.0501), then the 95% CI will include 1 between its limits (something like 0.99999 to ...). However, there could be a discrepancy if you were looking at the LR test p-value rather than the Wald significance: in that case you could have a significant p-value (because the LR test is more sensitive than the Wald test) together with a "non-significant" 95% Wald CI. The Wald test is considered too conservative.

HTH,
Marta García-Granero
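The "p exactly 0.05 means the CI touches 1" point is easy to verify numerically, because the Wald p-value and the Wald CI are built from the same z statistic. A minimal sketch; the coefficient and standard error below are made-up numbers chosen so that z lands exactly on the 95% critical value:

```python
from math import exp, sqrt, erf

Z95 = 1.959963984540054  # standard normal quantile for a 95% two-sided CI

def wald_or_summary(beta, se):
    """Wald two-sided p-value and 95% CI for an odds ratio, given the
    log-odds coefficient and its standard error."""
    z = beta / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2.0))))
    lower, upper = exp(beta - Z95 * se), exp(beta + Z95 * se)
    return exp(beta), p, lower, upper

# Coefficient engineered so that z == Z95: the Wald p-value comes out
# as 0.05 and the CI's lower limit as exactly 1.
or_, p, lo, hi = wald_or_summary(Z95 * 0.5, 0.5)
print(round(p, 4), round(lo, 4))  # 0.05 1.0
```

Note this agreement holds only for the Wald pair: an LR test p-value is computed from the likelihood, not from beta/se, so it can disagree with the Wald CI in exactly the way described above.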
