Hi,
I have an interesting problem I was hoping someone could offer me some advice on. I'm currently running a binary logistic regression on data (n=62), but in one case (one dependent, one independent variable, both dichotomous) the test of the coefficient using the Wald chi-square test gives a different result than the overall test of the model using the likelihood ratio chi-square test.

As the model is significant according to the latter (p=.048) yet the sole independent variable is non-significant according to the former (p=.071), I'm not sure which one to run with. If anyone knows the appropriate course of action and could let me know, I'd really appreciate it.

The only info I've been able to find is this:

"There are a few other things to note about the output below. The first is that although we have only one predictor variable, the test for the odds ratio does not match the overall test of the model. This is because the test of the coefficient is a Wald chi-square test, while the test of the overall model is a likelihood ratio chi-square test. While these two types of chi-square tests are asymptotically equivalent, in small samples they can differ, as they do here. Also, we have the unfortunate situation in which the results of the two tests give different conclusions. This does not happen very often. In a situation like this, it is difficult to know what to conclude. One might consider the power, or one might decide whether an odds ratio of this magnitude is important from a clinical or practical standpoint."

Cheers,
John
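The asymptotic-equivalence point in the passage John quotes can be made concrete. With one dichotomous predictor both statistics have closed forms in the 2x2 cell counts, so a minimal sketch needs no regression routine (the counts below are hypothetical, not John's data; the Wald variance is the standard Woolf estimate for a log odds ratio):

```python
import math

def wald_and_lr(a, b, c, d):
    """Wald and likelihood-ratio chi-square (1 df) for a logistic
    regression of a dichotomous outcome on a dichotomous predictor,
    from the 2x2 cell counts:
        group 0: a events, b non-events
        group 1: c events, d non-events
    With one binary predictor the model is saturated, so the MLEs of
    the event probabilities are the observed cell proportions."""
    n = a + b + c + d
    # Wald: squared log odds ratio over its estimated variance
    beta = math.log((c / d) / (a / b))      # fitted log odds ratio
    var_beta = 1/a + 1/b + 1/c + 1/d        # Woolf variance estimate
    wald = beta ** 2 / var_beta
    # Likelihood ratio: 2 * (LL of fitted model - LL of null model)
    ll_full = (a * math.log(a / (a + b)) + b * math.log(b / (a + b))
             + c * math.log(c / (c + d)) + d * math.log(d / (c + d)))
    p = (a + c) / n                         # pooled event rate (null model)
    ll_null = (a + c) * math.log(p) + (b + d) * math.log(1 - p)
    lr = 2 * (ll_full - ll_null)
    return wald, lr

def p_value_chi2_1df(x):
    # survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    return math.erfc(math.sqrt(x / 2))

# Hypothetical small-sample table with a sparse cell:
# the two statistics clearly disagree
wald, lr = wald_and_lr(9, 1, 3, 7)
print(f"Wald chi2 = {wald:.2f}, p = {p_value_chi2_1df(wald):.4f}")
print(f"LR   chi2 = {lr:.2f}, p = {p_value_chi2_1df(lr):.4f}")
```

With these made-up counts the Wald statistic is noticeably smaller than the likelihood-ratio statistic, the same direction of discrepancy John reports; the sparse cell inflates the Wald standard error.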
I think that the theoretical position of the overall test is superior to that of the Wald test, so I have never used the Wald test when there was an overall test of the same hypothesis that uses either reduction of sums of squares or improvement of likelihood. The Wald test is the ratio of the coefficient to an approximation of its standard error, which I have always assumed is more vulnerable to failures of assumptions.

In this case, I would start with the fact that this is a 2x2 table, and I would want to know that it meets the assumptions on expected cell counts before using the simple chi-squared test. And Bruce Weaver just lately pointed out to the list that you can get the (N-1) version of that test, for a 2x2 table, by looking at the Mantel linear-contrast test in Crosstabs.

--
Rich Ulrich

> Date: Thu, 25 Jul 2013 19:06:27 -0700
> From: [hidden email]
> Subject: Wald chi-square test vs likelihood ratio chi-square test. Which to accept in Binary Logistic Regression
> To: [hidden email]
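The 2x2 chi-squared test Rich suggests, and the (N-1) variant Bruce Weaver mentioned (which for a 2x2 table is just the Pearson statistic rescaled by (N-1)/N, what the Mantel linear-by-linear test reduces to), can be sketched as follows with hypothetical counts:

```python
def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], via the
    standard closed form n(ad - bc)^2 / (r1 * r2 * c1 * c2)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def n_minus_1_chi2_2x2(a, b, c, d):
    """The 'N-1' chi-square: Pearson rescaled by (n - 1)/n.  For a 2x2
    table this equals the Mantel linear-by-linear statistic."""
    n = a + b + c + d
    return pearson_chi2_2x2(a, b, c, d) * (n - 1) / n

def min_expected_count(a, b, c, d):
    """Smallest expected cell count -- the usual rule of thumb asks for
    all expected counts to be at least about 5 before trusting the
    simple chi-squared test."""
    n = a + b + c + d
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    return min(r * k / n for r in row_totals for k in col_totals)

# Hypothetical counts (not John's data)
print(pearson_chi2_2x2(9, 1, 3, 7))     # Pearson statistic
print(n_minus_1_chi2_2x2(9, 1, 3, 7))   # (N-1) version
print(min_expected_count(9, 1, 3, 7))   # check the expectation assumption
```

Checking `min_expected_count` first is the point of Rich's caveat: if the smallest expectation is well under 5, neither version of the simple chi-squared test is on firm ground.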
Hi Jonny, The chi-square for the overall model tests whether the null hypothesis (that the model provides a good fit to the data) is true. And as the statistic is significant the null is rejected in favor of the alternative (that the model does not provide a good fit to the data) -- thus the information for both tests is consistent.
Best,
James

On Fri, Jul 26, 2013 at 12:12 AM, Rich Ulrich <[hidden email]> wrote:
James C. Whanger
James,
I'm sorry, but I am not sure that I follow. I do see that *when* there is an overall test that leaves out both the mean and the parameter of interest, then you could have an overall "fit" that is poor, of the sort where the non-fit is divided between the mean and the group parameter. Thus, the overall test could be significant while the group test is not, because the mean has not been accounted for.

I think you can get detailed testing, which includes tests on means, from log-linear modeling. Various models include a Wald test, sometimes where there is no test on that parameter for the improvement of likelihood. But when you have both tests on the same parameter, I think that the test on improvement of likelihood is the more robust test. (A difference between two tests of the same thing always warns you that you may be "pushing" your assumptions too far.)

I'm not looking at any printout, but I did not think that any logistic regression routine showed the overall fit without the inclusion of the mean. If the overall test is a 1 d.f. test, then it compares the null model (with mean) to the fitted model with a group parameter, and it is exactly the same hypothesis being tested. In that case, the disparity between 0.048 and 0.071 is an actual inconsistency, and not a difference attributable to an unreported test on the mean being 50%.

--
Rich Ulrich

Date: Fri, 26 Jul 2013 08:05:55 -0400
From: [hidden email]
Subject: Re: Wald chi-square test vs likelihood ratio chi-square test. Which to accept in Binary Logistic Regression
To: [hidden email]
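Rich's last point can be checked numerically: with a single dichotomous predictor, the 1 d.f. omnibus likelihood-ratio test compares the intercept-only model (pooled event rate) against the fitted model, and both models contain the intercept, so the only parameter at stake is the group coefficient. A minimal sketch with hypothetical cell counts:

```python
import math

def loglik(cells, probs):
    """Binomial log-likelihood: cells = [(events, nonevents), ...] per
    group, probs = fitted event probability per group."""
    return sum(e * math.log(p) + f * math.log(1 - p)
               for (e, f), p in zip(cells, probs))

def omnibus_lr(cells):
    """1 d.f. omnibus LR chi-square for one dichotomous predictor.
    Both models contain the intercept (the 'mean'); they differ only in
    the group coefficient, so this omnibus test and the LR test of that
    single coefficient are the same test."""
    events = sum(e for e, _ in cells)
    total = sum(e + f for e, f in cells)
    pooled = events / total
    ll_null = loglik(cells, [pooled] * len(cells))            # intercept only
    ll_full = loglik(cells, [e / (e + f) for e, f in cells])  # + group term
    return 2 * (ll_full - ll_null)

groups = [(9, 1), (3, 7)]   # hypothetical (events, non-events) per group
print(f"omnibus LR chi2 (1 df) = {omnibus_lr(groups):.2f}")
```

Dropping the group term from the full model yields exactly the intercept-only model, so the omnibus statistic and the single-coefficient LR statistic coincide, which is Rich's point: a gap like 0.048 vs 0.071 reflects the Wald approximation, not a hidden test on the mean.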
My apologies, Rich. You are absolutely correct. I was confusing the likelihood ratio chi-square test you described with the omnibus chi-square test used in structural equation modeling. For that statistic, the sample covariance matrix is compared to the estimated population (or model-implied) covariance matrix, and a lack of difference is evidence that the observed sample covariance matrix is consistent with the population, and that the data produced by the model are consistent with what would be expected. Null hypothesis: Sigma = Sigma(Theta).
On Sat, Jul 27, 2013 at 5:53 PM, Rich Ulrich <[hidden email]> wrote:
James C. Whanger