Dear SPSS users, I wonder why the following regression tools are not available in the point-and-click menus of SPSS:
1) the Jarque-Bera test for normality of residuals in regression,
2) White's heteroskedasticity test for regression analysis,
3) the Ramsey RESET (regression specification error test), and
4) the Chow breakpoint test for structural stability.
Syntax for any of these is welcome. Thank you. Eins |
Eins, According to Wikipedia, the Jarque-Bera (JB) statistic follows a chi-squared distribution with 2 degrees of freedom. The statistic is defined as:

JB = (n / 6) * [S^2 + (1/4) * (K - 3)^2]

where n = sample size, S = skewness, and K = kurtosis. Since SPSS already subtracts 3 from kurtosis to center it on 0, the formula simplifies to:

JB = (n / 6) * [S^2 + (1/4) * K^2]

Now calculate the JB statistic on the residuals and use 1 - CDF.CHISQ in COMPUTE to obtain the p-value. I do not have time to discuss the other tests at the moment. Ryan
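A minimal syntax sketch of that recipe (the model variables y, x1 and x2, the saved-residual name res_1, and the numeric values in the COMPUTE lines are placeholders, not values from this thread):

REGRESSION
  /DEPENDENT y
  /METHOD=ENTER x1 x2
  /SAVE RESID(res_1).
DESCRIPTIVES VARIABLES=res_1
  /STATISTICS=SKEWNESS KURTOSIS.
* Plug the N, skewness and kurtosis that DESCRIPTIVES reports into the
* simplified formula; SPSS kurtosis is already excess kurtosis, so no -3
* adjustment is needed.
COMPUTE nobs = 150.
COMPUTE skew = 0.42.
COMPUTE kurt = -0.31.
COMPUTE jb = (nobs / 6) * (skew**2 + (kurt**2) / 4).
COMPUTE sig_jb = 1 - CDF.CHISQ(jb, 2).
EXECUTE.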
|
Administrator
|
Probably easiest would be to use OMS with DESCRIPTIVES to toss the kurtosis, skewness, and N values into a dataset, then apply the simple formula that Ryan posted.
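A hedged sketch of that OMS route (the residual variable res_1 and the output path are placeholders; the names OMS gives the captured columns are derived from the table's column labels and can vary, so check them before computing):

OMS
  /SELECT TABLES
  /IF COMMANDS=['Descriptives'] SUBTYPES=['Descriptive Statistics']
  /DESTINATION FORMAT=SAV OUTFILE='C:\temp\res_desc.sav'.
DESCRIPTIVES VARIABLES=res_1
  /STATISTICS=SKEWNESS KURTOSIS.
OMSEND.
GET FILE='C:\temp\res_desc.sav'.
DISPLAY DICTIONARY.
* Identify the captured N, skewness and kurtosis columns in the dictionary
* listing, then apply Ryan's formula to them with COMPUTE and CDF.CHISQ as
* in the earlier sketch.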
Eins: Next time post references so others don't have to track down formulas. I looked up the others but I am not willing to program them for free since I really don't have any particular use for them in my own work!
Please reply to the list and not to my personal email.
Those desiring my consulting or training services please feel free to email me. --- "Nolite dare sanctum canibus neque mittatis margaritas vestras ante porcos ne forte conculcent eas pedibus suis." Cum es damnatorum possederunt porcos iens ut salire off sanguinum cliff in abyssum?" |
In reply to this post by Ryan
Adding to Ryan's comments, and leaving aside the issue of whether testing for normality is a good idea, there is evidence that the JB test has poor small-sample properties. E.g.,

http://ocw.metu.edu.tr/mod/resource/view.php?id=2707&redirect=1
"The chi-square approximation, however, is overly sensitive for small samples, rejecting the null hypothesis often when it is in fact true. Furthermore, the distribution of p-values departs from a uniform distribution and becomes a right-skewed uni-modal distribution, especially for small p-values. This leads to a large Type I error rate."

and

http://hj.se/download/18.3bf8114412e804c78638000150/1299244445855/WP2010-8.pdf
"The JB statistic has an asymptotic chi-square distribution with two degrees of freedom. Mantalos (2010) in a Monte Carlo study showed by using three different definitions (estimates) of the sample skewness and kurtosis, that the JB has rather poor small sample properties, the slow convergence of the test statistic to its limiting distribution, makes the test over-sized for small nominal level and under-sized for larger than 3% levels even in a reasonably large sample. Even the power of the tests shows the same erratic form."

While it is easy to compute this statistic with a trivial program in Statistics, Statistics does provide Anderson-Darling and K-S normality tests as well as Q-Q plots that may be more informative about deviations from normality. And GLM/UNIANOVA provides a Levene test of equality of error variance.

I would also note that the SPSSINC BREUSCH PAGAN extension command (Analyze > Regression > Residual Heteroscedasticity) allows for residual variance models as well as a simple heteroscedasticity test and has a dialog box interface.

Jon Peck (no "h") aka Kim
Senior Software Engineer, IBM
[hidden email]
phone: 720-342-5621 |
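For reference, a hedged syntax sketch of the built-in checks mentioned above; res_1, y and group are placeholder variable names. EXAMINE's NPPLOT keyword produces normal Q-Q plots and the K-S (Lilliefors) and Shapiro-Wilk normality tests, and UNIANOVA's HOMOGENEITY keyword prints the Levene test:

EXAMINE VARIABLES=res_1
  /PLOT NPPLOT
  /STATISTICS NONE.
* Levene test of equality of error variances (requires a grouping factor).
UNIANOVA y BY group
  /PRINT=HOMOGENEITY.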