Comparing regression coefficients? Is it allowed?


Comparing regression coefficients? Is it allowed?

Yuri
Dear Everyone! :D

Simply put, I have A and B predicting Y (with a bunch of other variables as well).

A = leadership style 1 (called transformational leadership).
B = leadership style 2 (called servant leadership).
Y = turnover intention (whether you want to leave your job).

Running my first model (containing A predicting Y) I get the following output (among other things) for Y as an outcome:
   coefficient=-.4162, se=.1642, t=-2.5342, p=.0126

Running my second model (containing B predicting Y) I get the following output (among other things) for Y as an outcome:
    coefficient=-.3042, se=.1388, t=-2.19, p=.0304

So I'm wondering: can I say that A is more strongly negatively related to Y? I mean, it's more significant...

How could I compare them?

Best,

Yuri

Re: Comparing regression coefficients? Is it allowed?

Rich Ulrich
You probably want to quit with the observation that, although one
has a p-value that is slightly smaller, the difference is small and well within
sampling variability.  Putting a number to that variability would be awkward,
but could be done with /special-purpose/ bootstrapping (i.e., not one that SPSS is
programmed to provide).

Another way to show their lack of difference would be to run the equation
with both ... and point to the fact - assuming that it is true - that neither one
remains "significant" when the contribution of the other is taken into account.
This approach is probably better for comparing the two predictors than saying
that "one coefficient is larger".

The power of the comparison will be stronger when one predictor has nothing
unique to contribute:  That is, if one variable is a much more reliable predictor
of the outcome in question, then it /might/ still have something to add to the other.

--
Rich Ulrich


===================== To manage your subscription to SPSSX-L, send a message to [hidden email] (not to SPSSX-L), with no body text except the command. To leave the list, send the command SIGNOFF SPSSX-L For a list of commands to manage subscriptions, send the command INFO REFCARD

Re: Comparing regression coefficients? Is it allowed?

Bruce Weaver
Administrator
Surely one would want to have both A and B (and possibly A*B) in the same model, would they not?  What is known about the relationship between A and B?


--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).

Re: Comparing regression coefficients? Is it allowed?

Andy W
In reply to this post by Yuri
To reinforce Rich's comments, I frequently recommend the paper by Andrew Gelman and Hal Stern, *The difference between “significant” and “not significant” is not itself statistically significant* in the American Statistician (pre-print here, http://www.stat.columbia.edu/~gelman/research/unpublished/signif3.pdf).

It is a common mistake I see in papers. It most often happens when one estimate has a p-value above 0.05 (say 0.09) and the other has a p-value below 0.05. The same logic applies to your situation.

It could be relevant to test whether the effects are equal - that is, the null hypothesis that the coefficients for A and B are equal. You need an estimate of the variance of each coefficient (the standard error squared) and of the covariance between the two coefficients, which can be printed out by whatever regression procedure you are using.

The test statistic for the test of equality is then simply:

(A - B)/SQRT(Var(A) + Var(B) - 2*Cov(A,B))

Where A and B are the coefficient estimates, Var(x) is the variance of a coefficient, and Cov(A,B) is their covariance. Under the null hypothesis, this statistic follows a standard normal distribution.

Here is a code snippet in SPSS doing this (just by copying & pasting the appropriate values from the tables).

*****************************************************************.
*Example of testing the equality between two coefficients
*from the same model.
*See http://stats.stackexchange.com/a/59093/1036.
*And http://stats.stackexchange.com/a/211597/1036.
*For additional references.

DATASET CLOSE ALL.
OUTPUT CLOSE ALL.

*Creating simulated data.
SET SEED 10.
INPUT PROGRAM.
LOOP Id = 1 TO 100.
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
DATASET NAME Sim.
COMPUTE A = RV.NORMAL(0,1).
COMPUTE B = A*-0.5 + RV.NORMAL(0,SQRT(0.5)).
COMPUTE Y = 3 + 0.4*A + 0.4*B + RV.NORMAL(0,1).
EXECUTE.

REGRESSION
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS BCOV R ANOVA
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT Y
  /METHOD=ENTER A B.

*Coefficient estimates - copy-pasted from tables.
COMPUTE #A = 0.316519.
COMPUTE #B = 0.285981.
*Standard errors and covariance copy-pasted.
COMPUTE #SE = SQRT( 0.105751**2 + 0.111585**2 - 2*0.006227 ).
*Test statistic, standard normal distribution.
COMPUTE Test_Z = (#A - #B)/#SE.
*Two tailed p-value.
COMPUTE PVal = 2*CDF.NORMAL(-ABS(Test_Z),0,1).
EXECUTE.
*****************************************************************.

This could be standardized via OMS or Python, but I don't need to do this often enough for me to worry about it.
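Since Python comes up as an option here: the same computation can be sketched as a standalone Python snippet using only the standard library, plugging in the same copy-pasted values as the SPSS example above (the function name is just illustrative).

```python
import math

def coef_equality_z(a, b, se_a, se_b, cov_ab):
    """Wald-type Z test of H0: coefficient on A equals coefficient on B,
    where both coefficients were estimated in the same model."""
    se_diff = math.sqrt(se_a**2 + se_b**2 - 2 * cov_ab)
    z = (a - b) / se_diff
    # two-tailed p-value from the standard normal distribution
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Coefficients, standard errors, and covariance from the SPSS output above.
z, p = coef_equality_z(0.316519, 0.285981, 0.105751, 0.111585, 0.006227)
print(round(z, 4), round(p, 4))
```

With these values the Z statistic is small and the two-tailed p-value is large, i.e., no evidence that the two coefficients differ.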
Andy W
apwheele@gmail.com
http://andrewpwheeler.wordpress.com/

Re: Comparing regression coefficients? Is it allowed?

Ryan
The test of difference in coefficients can be easily accomplished using the TEST subcommand of MIXED. I believe examples have been posted previously.

Ryan


Re: Comparing regression coefficients? Is it allowed?

Mike
In reply to this post by Bruce Weaver
To expand on Bruce's points below, I could be wrong but I
think the OP is asking the wrong question and that one
needs to know the following:

(1) Is the N for the equation with A = N for the equation with B?
If not, the differences in p-values can be explained by the
difference in Ns.

(2) Are there other variables in the equation beside A or B?

(3) What are the following correlations:
(a) r(A,Y)
(b) r(B,Y)
(c) r(A,B)

(4) Doesn't the OP really want to know whether the increase in R^2
for A (i.e., the squared semipartial r for A,Y) is the same as or different
from the increase in R^2 for B (the squared semipartial for B,Y)?  As Bruce
suggests below, wouldn't one want an equation with both
A and B to see which has the greater increase? (If r(A,B) = 0.00,
then it probably doesn't matter, but I doubt that the correlation
is zero.) The interaction term A*B, if significant, also has to
be taken into account.

(5) The question asked by the OP confuses the probability
associated with a test statistic with an effect size.  Isn't the
question the OP really asking "is the magnitude of the effect
of A the same as the magnitude of the effect of B"?
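Point (4) can be made concrete with a small numerical sketch. The numpy code below (simulated data, hypothetical variable names) compares the increase in R^2 when A versus B is entered last into a model already containing the other, i.e., the squared semipartial correlations:

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(10)
n = 100
A = rng.normal(size=n)
B = -0.5 * A + rng.normal(scale=np.sqrt(0.5), size=n)  # correlated with A
Y = 3 + 0.4 * A + 0.4 * B + rng.normal(size=n)

r2_full = r_squared(np.column_stack([A, B]), Y)
sr2_A = r2_full - r_squared(B[:, None], Y)  # increment when A enters last
sr2_B = r2_full - r_squared(A[:, None], Y)  # increment when B enters last
print(r2_full, sr2_A, sr2_B)
```

The closer r(A,B) is to zero, the closer these increments come to the simple squared correlations r(A,Y)^2 and r(B,Y)^2.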

-Mike Palij
New York University
[hidden email]


Re: Comparing regression coefficients? Is it allowed?

Jon Peck
As we would all agree, statistical significance is not a measure of variable importance or strength.  While there is no universally accepted definition of importance, the STATS RELIMP extension command can compute six importance metrics that may be helpful.

Shapley Value: the incremental R2 for the variable averaged over all models. This is referred to as LMG in the output.

First: The R2 for the variable when entered first

Last: The incremental R2 when the variable is entered last

Beta Sq: The square of the standardized coefficient. These values would sum to the R2 if the independent variables were uncorrelated.

Pratt: The standardized coefficient times the correlation

CAR: Marginal correlations after adjusting for correlations among the regressors.

--------

The most interesting IMO is the Shapley value, although it is computationally expensive, as you might imagine.

While this may be overkill for the original question, this extension command is generally pretty useful.  

The procedure also reports the average regression coefficient for each regressor for each possible number of explanatory variables.  Here is an example (sorry for the formatting).

Average Coefficients by Model Size

                   1 group     2 groups     3 groups     4 groups
jobtime            142.723      151.540      158.374      163.876
prevexp            -15.913      -16.458      -19.366      -24.718
salbegin             1.909        1.745        1.606        1.491
jobcatCustodial  3,100.349    3,399.339    4,633.179    6,823.174
jobcatManager   36,139.258   28,571.821   20,475.486   11,844.917

STATS RELIMP is included in the R Essentials for Statistics.
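For intuition about what the Shapley/LMG metric is computing, here is a brute-force numpy sketch (not the STATS RELIMP implementation, and only practical for a handful of predictors): each predictor's incremental R^2, averaged over every possible order of entry.

```python
from itertools import permutations
import numpy as np

def r2(cols, X, y):
    """R^2 of an OLS fit on the predictor columns in `cols` (plus intercept)."""
    X1 = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

def lmg(X, y):
    """Average incremental R^2 per predictor over all entry orders.
    Exponential in the number of predictors - use with care."""
    k = X.shape[1]
    contrib = np.zeros(k)
    orders = list(permutations(range(k)))
    for order in orders:
        entered, prev = [], 0.0  # intercept-only baseline has R^2 = 0
        for j in order:
            entered.append(j)
            cur = r2(entered, X, y)
            contrib[j] += cur - prev
            prev = cur
    return contrib / len(orders)

rng = np.random.default_rng(10)
n = 100
X = rng.normal(size=(n, 3))
y = 1 + X @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=n)
shares = lmg(X, y)
print(shares, shares.sum())
```

A useful property to check: the LMG shares sum exactly to the full-model R^2, since the increments along each ordering telescope to it.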




--
Jon K Peck
[hidden email]


Re: Comparing regression coefficients? Is it allowed?

Mike

Jon,
 
Very nice.  Thanks.
 
-Mike Palij
New York University
 

Re: Comparing regression coefficients? Is it allowed?

Andy W
In reply to this post by Ryan
Sorry Ryan, I wasn't able to find any examples (I do not doubt you posted them!). Mind hand-holding me a bit more? I've never grokked contrasts very well, and all the examples I can find are contrasts among the levels of a single factor variable.

So for my example would below test the equality of the A and B coefficients?

MIXED Y WITH A B
  /FIXED = A B
  /METHOD = ML
  /PRINT COVB
  /TEST = 'Equality of A-B Coefficients' ALL 0 1 -1.

(I don't have MIXED on my machine, so I can't check myself!)

-----------------------------------------------

GLM also has a slightly different syntax for the TEST subcommand (the VS keyword), so would this test for equality?

GLM Y WITH A B  /TEST = A VS B.

(Again do not have any of the advanced stats on any of my machines currently to test myself.)

-----------------------------------------------

I don't think this test is possible via EMMEANS (which would make it more general to GENLIN and GENLINMIXED) but would be grateful if anyone could prove me wrong!

Andy W
apwheele@gmail.com
http://andrewpwheeler.wordpress.com/

Re: Comparing regression coefficients? Is it allowed?

Yuri
In reply to this post by Yuri
Thank you all very much for replying in so much detail! It has vastly increased my understanding of the issue. I love how this community even gives references to make clear what is meant.

To address some of the points made:
- I want to thank both Andy & Rich for pointing out that comparing these coefficients this way is flawed. I also really appreciate that you took the time to attach an article that further explains this!
- Mike: you are right! I confused the two questions! I did not add both predictors (A & B) to the same model due to the multicollinearity between the variables (it's very high). I did, however, perform a factor analysis, and they seem to measure different constructs. Thank you very much for taking the time to respond.
- Jon: thank you very much for the detailed response! I think it's a bit of overkill though! haha

Also, everyone else, thanks a lot for replying!

The issue has been resolved :)



Re: Comparing regression coefficients? Is it allowed?

Bruce Weaver
Administrator
In reply to this post by Andy W
Hi Andy.  Your /TEST sub-command for MIXED looks right to me.  But for GLM (or UNIANOVA), I think you need to use LMATRIX, like this:

GLM Y WITH A B
  /LMATRIX = 'Equality of A-B Coefficients' ALL 0 1 -1.


For GLM/UNIANOVA, "the TEST subcommand allows you to test a hypothesis term against a specified error term."

For example, I first used it when analyzing a two-factor ANOVA with A as a random factor and B a fixed factor.  The SPSS defaults were not giving the results that matched the textbook example.  So rather than declaring A as a random factor, I told SPSS both A and B were fixed, but then used /TEST to get a test of B (the fixed factor) using MS_AB as the error term.

Here is an excerpt from my old syntax file:

* Same data, but with problem difficulty as a random factor  .

UNIANOVA
  y  BY a b
  /RANDOM = a /* Problem difficulty is a random factor */
  /METHOD = SSTYPE(3)
  /INTERCEPT = INCLUDE
  /CRITERIA = ALPHA(.05)
  /DESIGN = a b a*b.

* Note that SPSS has used MS(AB) as the error term for both
* main effects.  Many textbooks suggest that MS(AB) is the
* appropriate error term for the FIXED factor, and MS(error)
* is the appropriate error term for the RANDOM factor.  

* For more on why SPSS has done things this way, read the following:
* http://www.angelfire.com/wv/bwhomedir/spss/SPSS_GLM_mixed_model.html .

* We can produce the desired F-tests by doing an analysis
* that treats both A and B as fixed (this will yield the
* appropriate F-test for the random factor), and adding
* a custom hypothesis test for the fixed factor, B .

UNIANOVA
  y  BY a b
  /METHOD = SSTYPE(3)
  /INTERCEPT = INCLUDE
  /CRITERIA = ALPHA(.05)
  /TEST = b vs a*b /* B a fixed factor; use MS(AB) as error term  */
  /DESIGN = a b a*b.

* --------------------------------------.

If you don't want to click on that angelfire link (my first website, no longer in use, liable to produce pop-ups), you can view this old comp.soft-sys.stat.spss post by Dave Nichols instead:

https://groups.google.com/forum/#!topicsearchin/comp.soft-sys.stat.spss/subject$3AExpected$20AND$20subject$3Amean$20AND$20subject$3Asquares$20AND$20subject$3Aand$20AND$20subject$3Aerror$20AND$20subject$3Aterms$20AND$20subject$3Ain$20AND$20subject$3AGLM/comp.soft-sys.stat.spss/RyMaVl2pAZY

HTH.



Andy W wrote
Sorry Ryan, I wasn't able to find any examples (I do not doubt you posted them!) Mind hand-holding me a bit more - I've never grokked contrasts very well, and all the examples I can find are contrasts among a single factor variable.

So for my example would below test the equality of the A and B coefficients?

MIXED Y WITH A B
  /FIXED = A B
  /METHOD = ML
  /PRINT COVB
  /TEST = 'Equality of A-B Coefficients' ALL 0 1 -1.

(I don't have MIXED on my machine, so I can't check myself!)

-----------------------------------------------

GLM also has a slightly different syntax for TEST (the VS keyword), so would this test for equality?

GLM Y WITH A B  /TEST = A VS B.

(Again, I don't have any of the advanced stats modules on my machines currently, so I can't test this myself.)

-----------------------------------------------

I don't think this test is possible via EMMEANS (which would make it more general to GENLIN and GENLINMIXED) but would be grateful if anyone could prove me wrong!
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).

Re: Comparing regression coefficients? Is it allowed?

Rich Ulrich
Perhaps GLM, or whatever, compares two coefficients that should be compared.

I am pretty sure that you do not get a comparison of the two univariate regression
coefficients that were in the original question.  Right?

--
Rich Ulrich

> Date: Wed, 15 Jun 2016 14:17:06 -0700

> From: [hidden email]
> Subject: Re: Comparing regression coefficients? Is it allowed?
> To: [hidden email]
>
> [snip]

===================== To manage your subscription to SPSSX-L, send a message to [hidden email] (not to SPSSX-L), with no body text except the command. To leave the list, send the command SIGNOFF SPSSX-L For a list of commands to manage subscriptions, send the command INFO REFCARD

Re: Comparing regression coefficients? Is it allowed?

Bruce Weaver
Administrator
In reply to this post by Rich Ulrich
Right.  Andy's MIXED command with TEST sub-command is comparing the coefficients for two explanatory variables in the same model.


Rich Ulrich wrote
Perhaps GLM, or whatever, compares two coefficients that should be compared.

I am pretty sure that you do not get a comparison of the two univariate regression
coefficients that were in the original question.  Right?

--
Rich Ulrich

> [snip]

Re: Comparing regression coefficients? Is it allowed?

Ryan
In reply to this post by Andy W
Andy,

Sorry for the delayed response; I've been extremely busy. Nevertheless, you asked a question that gives me a good opportunity to provide some clarity regarding the TEST subcommand.

Let's use the following steps to test the difference between the partial slopes of two variables ("x1" and "x2") via the TEST subcommand:

1. Estimate the expected value of y conditional upon x1 = <value> and x2 = <value>
2. Estimate the expected value of y conditional upon x1=<value MINUS 1> and x2 = <value>
3. Estimate the partial slope of x1 by subtracting the coefficients of the same effects from steps 1 and 2
4. Estimate the expected value of y conditional upon x1 = <value> and x2 = <value MINUS 1>
5. Estimate the partial slope of x2 by subtracting the coefficients of the same effects from steps 1 and 4
6. Estimate the difference of the partial slopes of x1 and x2 by subtracting the coefficients of the same effects from steps 3 and 5

Below is a simulation illustrating how to work up to the contrast of interest using the steps:

Ryan

P.S. We could simplify some of the TEST statements by dropping the effects whose coefficients are all zero, but I opted not to, to keep the illustration as clear as possible.
--


*Generate Data.
set seed 98765432.
new file.
input program.
     loop ID= 1 to 1000.
     compute x1 = rv.normal(0,1).
     compute x2 = rv.normal(0,1).
     compute error = rv.normal(0,1).
     compute y  = -1.5 + .10*x1 + .05*x2 + error.
     end case.
     end loop.
end file.
end input program.
execute.

REGRESSION
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT y
  /METHOD=ENTER x1 x2.

mixed y with x1 x2
/fixed=x1 x2
/print=solution
/test 'Step 1: yhat|x1=1,x2=1' intercept 1 x1 1 x2 1
/test 'Step 2: yhat|x1=0,x2=0' intercept 1 x1 0 x2 1
/test 'Step 3: x1 partial slope' intercept 0 x1 1 x2 0
/test 'Step 4: yhat|x1=0,x2=0' intercept 1 x1 1 x2 0
/test 'Step 5: x2 partial slope' intercept 0 x1 0 x2 1
/test 'Step 6: difference between x1 and x2 partial slopes' intercept 0 x1 1 x2 -1.


On Wed, Jun 15, 2016 at 1:29 PM, Andy W <[hidden email]> wrote:
[snip]

Re: Comparing regression coefficients? Is it allowed?

Ryan
Small correction to the labels in steps 2 and 4:

/test 'Step 2: yhat|x1=0,x2=1' intercept 1 x1 0 x2 1
/test 'Step 4: yhat|x1=1,x2=0' intercept 1 x1 1 x2 0

Apologies.

Ryan


On Fri, Jun 24, 2016 at 10:57 PM, Ryan Black <[hidden email]> wrote:
[snip]