I wanted to get clarification regarding the advantage of hierarchical vs. simultaneous regression.
I ran a regression analysis two ways, one hierarchical and the other simultaneous. For the hierarchical version, I entered the demographic covariates in the first block and my main predictor variables in the second block. For the simultaneous version, all variables were entered in a single block. I noticed that the output for both regressions was the same: the overall model F-test was the same, the adjusted R2 (variance accounted for) was the same, and the standardized beta coefficients and corresponding t-test results were the same for both the hierarchical and the simultaneous regression.

For the hierarchical version, there is additional output showing the R2 change (variance accounted for) contributed by the second block and its corresponding F-test. That is the only difference I see between the two versions. Also, I noticed that when I squared the beta coefficients for the main predictor variables in the second block, they almost equaled the R2 change from block one to block two (the difference may have been due to rounding error).

My question is: is the main advantage of hierarchical regression that it tests whether the additional variables in subsequent blocks add a significant amount of R2, with the F-test for that block indicating whether or not that change is significant? I thought hierarchical regression held the variables in the previous blocks constant, and I was under the impression that the coefficients for those blocks would not change with the addition of predictors in subsequent blocks, but my comparison of the hierarchical and simultaneous outputs suggests otherwise. Are there other advantages to hierarchical regression that I am not aware of, beyond the F-test of whether the additional variance accounted for by the variables in subsequent blocks is significant?

Thank you in advance for responses.

pj
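For concreteness, a minimal SPSS syntax sketch of the two runs described above, using placeholder variable names (y for the outcome, age and sex for the demographic covariates, x1 and x2 for the main predictors; none of these names come from the original post). The CHANGE keyword on /STATISTICS is what requests the R2-change output for each block.

* Hierarchical: demographic covariates in block 1, main predictors in block 2.
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA CHANGE
  /DEPENDENT y
  /METHOD=ENTER age sex
  /METHOD=ENTER x1 x2.

* Simultaneous: everything entered in a single block.
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT y
  /METHOD=ENTER age sex x1 x2.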
"My question is, is the main advantage of the hierarchical regression is that
it tests if the additional variables in the subsequent blocks adds an additional amount of R2 change in variance, and that the f-test for that block will indicate whether or not that change is significant?" Your analysis is correct. As far as I know it is not possible in SPSS to add a block of variables to a regression model while keeping the regression weights of the variables already in the model constant. This is possible within the framework of structural equation modeling though in programs like Lisrel and EQS. Regards, Paul Oosterveld ===================== To manage your subscription to SPSSX-L, send a message to [hidden email] (not to SPSSX-L), with no body text except the command. To leave the list, send the command SIGNOFF SPSSX-L For a list of commands to manage subscriptions, send the command INFO REFCARD |
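As a point of reference, the F-test that SPSS reports for a block's R2 change is the usual test of an increment in R2 between nested models. Writing R2_1 for the model with only the first block, R2_2 for the model with both blocks, k for the number of predictors added in the second block, p for the total number of predictors in the larger model, and n for the sample size:

   F_change = [ (R2_2 - R2_1) / k ] / [ (1 - R2_2) / (n - p - 1) ],

with k and n - p - 1 degrees of freedom. When the block adds only a single predictor, this F equals the square of that predictor's t statistic in the full model.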
In reply to this post by pji
See below.
> I ran a regression analysis, one version hierarchical and the other
> simultaneous. [...] I noticed that the output for both regressions was the same.
> The overall model F-test was the same, the amount of adjusted R2 variance
> accounted for was the same, and the standardized beta coefficients and
> corresponding t-test results were the same for both the hierarchical and
> simultaneous regression.

That is as it should be.

> For the hierarchical, there is the output of the additional R2 variance
> accounted for by the second block and its corresponding F-test. [...] Also, I
> noticed that when I squared the beta coefficients for the main predictor
> variables in the second block, they almost equaled the R2 change in variance
> from block one to block two (the difference may have been due to rounding
> error).

That is not always the case. It is possible for variables to be in a "suppressor" relationship, where there are larger betas, even larger than +/- 1.0. The main use that I make of betas is to confirm that they *are* close to the zero-order correlation, in order to confirm that variables are *not* affecting each other strongly.

> My question is, is the main advantage of the hierarchical regression is that
> it tests if the additional variables in the subsequent blocks adds an
> additional amount of R2 change in variance, and that the f-test for that
> block will indicate whether or not that change is significant? [...] Are there
> other advantages to the hierarchical regression that I am not aware of, other
> than the hierarchical regression allows for an F-test to see if the additional
> variance accounted for by the variables in subsequent blocks is significant?

Yes. The overall test performed on the block is what you gain.

It is possible to save the residuals from the first block and perform a regression on those residuals using the second block; but that is not, in general, a recommended procedure. I think of it as a way to keep a larger d.f. in the residual, for a small sample. If it works well, it works rather like having a "propensity score" from the first block, instead of entering them as separate variables.

--
Rich Ulrich
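For concreteness, a rough SPSS syntax sketch of the two-step residual approach mentioned above, with placeholder variable names (y for the outcome, age and sex for the first-block covariates, x1 and x2 for the second-block predictors); as noted, this is not a generally recommended substitute for entering both blocks in a single model.

* Step 1: regress the outcome on the first-block covariates and save the residuals.
REGRESSION
  /DEPENDENT y
  /METHOD=ENTER age sex
  /SAVE RESID(res_block1).

* Step 2: regress those residuals on the second-block predictors.
REGRESSION
  /DEPENDENT res_block1
  /METHOD=ENTER x1 x2.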
In reply to this post by Paul Oosterveld
Thank you for your email response and for confirming my thoughts about regression.
p
Peter Ji, Ph.D.
Core Faculty
Adler School of Professional Psychology
17 North Dearborn Street
Chicago, IL 60602