question about comparing betas from two regression models


question about comparing betas from two regression models

Zdaniuk, Bozena-3

hello everybody,

I have a question about comparing betas.

I am running a regression predicting a treatment satisfaction variable (measured at t2, obviously) from pre-post treatment changes in two different predictors (I am defining 'change' here as the effect of the t2 measure of X while controlling for the t1 measure of X). If I enter the t1 and t2 measures of predictor X1 into one model, and then enter the t1 and t2 measures of a different predictor (X2) into a separate model, can I compare the standardized beta of the t2 measure of X1 from the first model with the standardized beta of the t2 measure of X2 from the second model, and decide whether one of them has a stronger association with Y than the other?
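For concreteness, the two models would look something like this in SPSS syntax (the variable names here are made up: satis_t2 is satisfaction at t2, and x1_t1, x1_t2, x2_t1, x2_t2 are the two predictors at the two time points):

REGRESSION
  /STATISTICS COEFF R ANOVA
  /DEPENDENT satis_t2
  /METHOD=ENTER x1_t1 x1_t2.

REGRESSION
  /STATISTICS COEFF R ANOVA
  /DEPENDENT satis_t2
  /METHOD=ENTER x2_t1 x2_t2.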

I think I need to run the separate models because X1 and X2 are moderately correlated. In separate models, each of them is a highly significant predictor of treatment satisfaction, but if I put both of them into one model (the t1 and t2 measures of both), the effects of the t2 measures wash out and nothing is significant.

Any pointers to anything online about comparing two different predictors from two different models predicting the same outcome (same sample) would also be greatly appreciated.

cheers,

bozena zdaniuk


Re: question about comparing betas from two regression models

Jon Peck
You have nonnested hypotheses, which is a problem, but in this situation you might be interested in predictor importance and sensitivity to the other variable. The STATS RELIMP extension command, which requires the R plugin, shows you the importance of each predictor and how it is affected by the presence of the other(s). There are several importance measures: I like the Shapley value, but six measures are available (only three for categorical variables). Note, however, that it does not give you significance tests.
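A rough sketch of the invocation, continuing the hypothetical variable names above. The subcommand and keyword names here are my best guess, not verified against the extension; run the /HELP form first, which prints the actual syntax diagram:

* Display the extension's syntax help.
STATS RELIMP /HELP.

* Hypothetical call - keyword names are assumptions; check the
  /HELP output. LMG is the Shapley-value measure.
STATS RELIMP DEPENDENT=satis_t2 ENTER=x1_t1 x1_t2 x2_t1 x2_t2
  /OPTIONS METHODS=LMG.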



--
Jon K Peck
[hidden email]


Re: question about comparing betas from two regression models

Bruce Weaver
Can you post the two REGRESSION commands along with descriptions of what the variables are? I don't know about others, but I think I would find that easier to follow than the description in your post above. ;-)

Thanks.


--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).

Re: question about comparing betas from two regression models

Rich Ulrich
Jon starts out, "You have nonnested hypotheses, which is a problem, ..."; to which I add: you are talking about /change scores/, and change scores are famously "a problem" for representing what they are presumed to represent. Books have been written. (BTW, I can say nothing in particular about STATS RELIMP, but comparing "importance" is full of serious assumptions.)

If every subject starts at the same low level, the reliable and valid measure of change could simply be the score at T2. The other choices are simple change, (T2 - T1), and some regressed version of change, (T2 - beta*T1). Problems of equal-interval scaling matter, too. Should the scores be transformed? And transformations cannot help where there is a floor effect or a ceiling effect. When you look at the three choices for change on individual cases, is any one of them clearly apt?
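In SPSS terms (variable names hypothetical, continuing the earlier sketch), the latter two candidates would be computed like this:

* Simple change score.
COMPUTE x1_diff = x1_t2 - x1_t1.
EXECUTE.

* Regressed change: the part of T2 not predictable from T1,
  saved as the unstandardized residual x1_chg.
REGRESSION
  /DEPENDENT x1_t2
  /METHOD=ENTER x1_t1
  /SAVE RESID(x1_chg).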

As I read the problem, you want a direct comparison, and a test, of whether the change in A is more predictive than the change in B. You /could/ compute the two changes (regression-predicted or otherwise) and enter them into a regression together, as sketched below. I've seen that disparaged, but I believe that was more about the problems with "change scores" than about the math in the ideal case.
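A minimal sketch of that combined model, again with made-up names (x1_chg and x2_chg being the two saved residualized changes):

REGRESSION
  /STATISTICS COEFF R ANOVA
  /DEPENDENT satis_t2
  /METHOD=ENTER x1_chg x2_chg.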

If either is significant, that shows it adds something beyond the other. In the unlikely case that both add something unique, the result is not what you asked for, but it should be very relevant.


You are also talking about comparing "predictors," and that raises problems of its own, especially for interpretation. I'm thinking of the tendency to conclude that the latent dimension of A is "more important" than the latent dimension of B, even when B has been measured with far less reliability. The narrower conclusion is that "our measure of A is more predictive than what we have measuring B."

--
Rich Ulrich



Re: question about comparing betas from two regression models

Jon Peck
The RELIMP Shapley-value measure for each independent variable is basically just an average of its incremental contribution to R2 when that variable is added to each possible combination of the other independent variables, optionally normalized. No more; no less. The RELIMP output also has the nice property that it shows how each coefficient changes as a function of model size.
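In symbols (a standard statement of the LMG/Shapley decomposition, written out here for clarity rather than taken from the RELIMP documentation): for predictor j among p predictors, with S_pi(j) the set of predictors entered before j in ordering pi,

\[
\mathrm{LMG}_j \;=\; \frac{1}{p!} \sum_{\pi} \Big[ R^2\big(S_\pi(j) \cup \{j\}\big) - R^2\big(S_\pi(j)\big) \Big]
\]

i.e., the R2 improvement from adding predictor j, averaged over all p! orderings of the predictors.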



--
Jon K Peck
[hidden email]
