test the difference between two t-values

15 messages

test the difference between two t-values

gilles-15
Is anyone aware of any reference (SPSS syntax would be a plus) that
demonstrates how the difference between two t-values can be
tested for statistical significance?


Thanks,

Gilles

Re: test the difference between two t-values

Salbod
Hi Gilles,
        I have not heard of a method to compare t ratios, but there are
methods to compare effect sizes. For example, the two ts of interest can be
converted to point-biserial correlations (Pearson rs); these correlations can
then be compared.
        I hope this is helpful.

--Steve
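
Steve's suggestion can be sketched numerically: convert each t to r = t / sqrt(t^2 + df), then compare the two independent rs with Fisher's z transformation. Below is a minimal Python illustration with made-up t-values and sample sizes (the list is SPSS-oriented, but the arithmetic is the same):

```python
import math

def t_to_r(t, df):
    """Convert a t ratio to a point-biserial correlation: r = t / sqrt(t^2 + df)."""
    return t / math.sqrt(t * t + df)

def compare_independent_rs(r1, n1, r2, n2):
    """z statistic for H0: rho1 == rho2, via Fisher's z transformation."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (z1 - z2) / se

# Hypothetical studies: t = 2.50, n = 60 (df = 58) vs. t = 1.20, n = 80 (df = 78).
r1 = t_to_r(2.50, 58)   # about 0.31
r2 = t_to_r(1.20, 78)   # about 0.13
z = compare_independent_rs(r1, 60, r2, 80)  # about 1.07, n.s. at alpha = .05
print(r1, r2, z)
```
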


Re: test the difference between two t-values

Stevan Nielsen
Steve,

I know the formulae for making comparisons between correlation coefficients, but could you share a recent reference or two where such comparisons have been reported in refereed journal articles?  I'm interested in contexts in which researchers compare effect sizes.

Best wishes,

Lars Nielsen

Stevan Lars Nielsen, Ph.D.
Clinical Professor
Clinical Psychologist
2518 WSC, BYU
Provo, UT 84602




Re: test the difference between two t-values

Alexander J. Shackman-2
I believe that Gray describes a Rosenthal/Rosnow technique for doing what
you want (convert Fs to effect sizes and contrast 2 ESs) in this empirical
report:

http://www.yale.edu/scan/Gray01.pdf
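
For a 1-df effect, the F-to-effect-size conversion in question is r = sqrt(F / (F + df_error)); since F = t^2 for 1 numerator df, this matches the t-to-r conversion discussed earlier in the thread. A quick sketch with hypothetical numbers (Python for illustration):

```python
import math

def f_to_r(F, df_error):
    """Effect-size correlation from a 1-df F ratio: r = sqrt(F / (F + df_error))."""
    return math.sqrt(F / (F + df_error))

# Hypothetical: F(1, 58) = 6.25 is the same effect as t(58) = 2.50.
r = f_to_r(6.25, 58)
r_from_t = 2.50 / math.sqrt(2.50 ** 2 + 58)
print(r, r_from_t)  # identical
```
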

--
Alexander J. Shackman
Laboratory for Affective Neuroscience
Waisman Laboratory for Brain Imaging & Behavior
University of Wisconsin-Madison
1202 West Johnson Street
Madison, Wisconsin 53706

Telephone: +1 (608) 358-5025
FAX: +1 (608) 265-2875
EMAIL: [hidden email]
http://psyphz.psych.wisc.edu/~shackman

strategy for ordinal x continuous interaction

Dale Glaser
I am on a project where I will be testing an interaction between a continuous-level variable and what is in practice a continuous-level moderator (hours commute); however, 'hours commute' was captured as an ordered categorical variable (1 = < 5 hrs; 2 = 5-10; 3 = 11-15 ... to 8 = > 40 hours). Thus, to capture the interaction in a moderated multiple regression (MMR) context, I usually would heed the advice of Cohen et al. (2003) and center the predictors so as to decrease collinearity of the lower-order terms. However, with an ordinal variable, even if the scaling is deemed to be arbitrary, it seems problematic to mean-center such a variable as well as create the multiplicative term. So I was curious what strategy any of you employ when creating an ordinal x continuous interaction term in an MMR context. I know that among the strategies used in structural equation modeling (SEM) for models with moderators, one does create multiplicative terms between the manifest indicators for the latent constructs (e.g., using LISREL notation, LX21 x LX22 for the 2nd item loading on the first two latent constructs), and often those are ordinal, self-report items, and centering may or may not be executed. So, off the top of my head, here are my proposed options, and I would be most appreciative of any of your opinions:

  (1) assume that this is much ballyhoo about nothing and create the continuous x ordinal multiplicative term with impunity (and centering is fine under the assumption of arbitrariness of scaling for the ordinal variable), though it would seem caution is in order for interpreting the unstandardized partial regression coefficient

  (2) since 64% of the sample from this project commute > 40 hours, create a dummy coded binary variable coding for '< 40 hours' and '> 40 hours', but lose the rank-ordering nature of the variable and attendant information (leading to truncation of variation).

  (3) and, less appealing, create 7 dummy-coded vectors (to capture the 8 levels of 'hours') and create a potentially over-parameterized model with the continuous x 7 dummy-coded vectors, and as with option #2 lose the theoretical continuity of the moderator variable

  (4) I was trying to think if there was a strategy akin to polyserial/polychoric correlation where I could create some type of threshold parameter for the ordinal variable, but I'm not sure of the advisability of such an approach.

  Any feedback would be most appreciated...thank you....
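
A sketch of option (1), the Cohen et al. centering-then-product routine, on simulated data (variable names and numbers are hypothetical; Python/NumPy rather than SPSS syntax):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data: continuous predictor, ordinal moderator (commute category 1-8),
# and an outcome built with a true interaction coefficient of 0.4.
x = rng.normal(size=n)
commute = rng.integers(1, 9, size=n).astype(float)
y = 0.5 * x + 0.3 * commute + 0.4 * x * commute + rng.normal(size=n)

# Mean-center the predictors, then form the product term from the centered scores.
xc = x - x.mean()
cc = commute - commute.mean()
X = np.column_stack([np.ones(n), xc, cc, xc * cc])
b, *_ = np.linalg.lstsq(X, y, rcond=None)  # b[3] is the interaction coefficient

# Centering does not change the interaction coefficient, but it does reduce the
# collinearity between the product term and its lower-order components.
raw_corr = abs(np.corrcoef(x, x * commute)[0, 1])
cen_corr = abs(np.corrcoef(xc, xc * cc)[0, 1])
print(b[3], raw_corr, cen_corr)
```
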

  Dale Glaser


Dale Glaser, Ph.D.
Principal--Glaser Consulting
Lecturer/Adjunct Faculty--SDSU/USD/AIU
President-Elect, San Diego Chapter of
American Statistical Association
3115 4th Avenue
San Diego, CA 92103
phone: 619-220-0602
fax: 619-220-0412
email: [hidden email]
website: www.glaserconsult.com

RE : strategy for ordinal x continuous interaction

F. Gabarrot
Hello,

Dale Glaser wrote
  (3) and, less appealing, create 7 dummy-coded vectors (to capture the 8 levels of 'hours') and create a potentially over-parameterized model with the continuous x 7 dummy-coded vectors, and as with option #2 lose the theoretical continuity of the moderator variable
I think you should create 7 vectors using polynomial coding (linear, quadratic, ..., nth order) rather than dummy coding. Such a coding may account for the theoretical continuity of the moderator variable. Another advantage of using polynomial coding is that it is centered, and will thus decrease collinearity between the lower-order terms.

I hope this helps.
Best wishes,

Fabrice

Re: test the difference between two t-values

Art Kendall
In reply to this post by gilles-15
Do you have the raw data?  Do you mean independent or repeated measures
t-test of mean differences in two studies?
If you have 2 studies both of which have the same IV and DV, you could
do a STUDY (2) by Original_IV (2) ANOVA.
The interaction term would test whether the difference between the
levels of the IV was statistically different between the two studies.
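
The STUDY (2) by IV (2) interaction amounts to a contrast of contrasts on the four cell means. A sketch with simulated raw data (hypothetical numbers; Python rather than SPSS syntax):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated raw data: two studies, each with the same two-level IV (30 per cell).
cells = {
    (1, 1): rng.normal(10.0, 2.0, 30), (1, 2): rng.normal(12.0, 2.0, 30),
    (2, 1): rng.normal(10.0, 2.0, 30), (2, 2): rng.normal(10.5, 2.0, 30),
}

# Interaction contrast (+1, -1, -1, +1): the difference of the two study effects.
weights = {(1, 1): 1, (1, 2): -1, (2, 1): -1, (2, 2): 1}
contrast = sum(w * cells[k].mean() for k, w in weights.items())

# Pooled within-cell variance = the ANOVA mean square error.
ss_err = sum(((v - v.mean()) ** 2).sum() for v in cells.values())
df_err = sum(len(v) - 1 for v in cells.values())
ms_err = ss_err / df_err

# t-test of the contrast; t**2 equals the interaction F from the two-way ANOVA.
se = np.sqrt(ms_err * sum(w * w / len(cells[k]) for k, w in weights.items()))
t = contrast / se
p = 2 * stats.t.sf(abs(t), df_err)
print(contrast, t, p)
```
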

Art Kendall
Social Research Consultants

Re: strategy for ordinal x continuous interaction

Art Kendall
In reply to this post by Dale Glaser
Did you look at CATREG -- categorical regression? I have the impression
that it would test the fit of different levels of measurement.

Just curious: is your commuting time hours per month? Or is this some
very unusual population?

Art Kendall
Social Research Consultants


Re: RE : strategy for ordinal x continuous interaction

Dale Glaser
In reply to this post by F. Gabarrot
Fabrice, just to check my understanding....I am perusing ch. 22 in Draper and Smith (1998) on Orthogonal Polynomials...so, assuming equal spacing of the ordinal variable, and given 8 ordered categories, I could create the terms by referring to a table of Coefficients of Orthogonal Polynomials (p. 466), such as

   For Linear term:
  -7  -5  -3  -1  1  3  5  7

  For quadratic term:
  7  1  -3  -5  -5  -3  1  7, etc. up to the nth term

  unfortunately the tables in Draper and Smith (when I refer to n = 8) only go up to the 6th term.......

  suggestions?....dale
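
One way around the table limit is to generate the codes rather than look them up: a QR decomposition of the powers of the centered level scores yields all 7 orthogonal polynomial contrasts for 8 equally spaced categories (a NumPy sketch; R's contr.poly does the same thing):

```python
import numpy as np

def orthogonal_poly_codes(n_levels):
    """All (n_levels - 1) orthogonal polynomial contrast vectors for equally
    spaced levels, columns ordered linear, quadratic, ..., via QR."""
    x = np.arange(n_levels, dtype=float)
    x -= x.mean()                                # center the level scores
    V = np.vander(x, n_levels, increasing=True)  # columns: 1, x, x**2, ...
    Q, _ = np.linalg.qr(V)
    return Q[:, 1:]                              # drop the constant column

codes = orthogonal_poly_codes(8)

# Rescaled, the first two columns reproduce the textbook integer codes.
linear = codes[:, 0] / codes[0, 0] * -7   # -7 -5 -3 -1  1  3  5  7
quad = codes[:, 1] / codes[0, 1] * 7      #  7  1 -3 -5 -5 -3  1  7
print(np.round(linear), np.round(quad))
```
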






Dale Glaser, Ph.D.
Principal--Glaser Consulting
Lecturer/Adjunct Faculty--SDSU/USD/AIU
President-Elect, San Diego Chapter of
American Statistical Association
3115 4th Avenue
San Diego, CA 92103
phone: 619-220-0602
fax: 619-220-0412
email: [hidden email]
website: www.glaserconsult.com

Re: test the difference between two t-values

gilles-15
In reply to this post by Art Kendall
Thanks for the responses, thus far.

Art: your response is getting at what I would like to test, but does not
address the purpose of what I want to do. That is, some (many?) people often
test simple main effects after they get a statistically significant
interaction effect, hoping or expecting to observe that one simple effect
t-test will be statistically significant and the other simple effect t-test
will not be statistically significant (for the 2 by 2 case). However, tests
of simple main effects do not in fact test for an interaction, as it is
wholly possible to observe both simple main effect t-tests as statistically
significant in the presence of a statistically significant interaction. The
proper analysis, of course, is to perform a 'contrast-contrast interaction'
(how often do you see one of those in the literature?).

For the purposes of educating my students, I would like to demonstrate that
while tests of simple main effects are probably not very useful, the
difference between two t-values derived from a simple main effects analysis
is effectively equivalent to 'contrast-contrast interaction' testing.

From a more applied perspective, I suppose the demonstration of the validity
of testing the difference between two t-values in this context may have some
value, as many applied researchers probably have a difficult time with the
application of 'contrast-contrast interactions'.

Gilles
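
The classroom point can be made with a small simulation: both simple-effect t-tests come out comfortably significant, yet only the contrast-contrast (difference-of-differences) test speaks to the interaction. (Hypothetical cell means; Python rather than SPSS syntax:)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100  # per cell

# Hypothetical 2 x 2 data: factor A has an effect at BOTH levels of B,
# but a larger one at B = 1 -- a genuine interaction.
a1b1 = rng.normal(13.0, 1.5, n)   # A effect of 3.0 at B = 1
a2b1 = rng.normal(10.0, 1.5, n)
a1b2 = rng.normal(11.5, 1.5, n)   # A effect of 1.5 at B = 2
a2b2 = rng.normal(10.0, 1.5, n)

# Simple main effects of A at each level of B: both are significant.
t_b1, p_b1 = stats.ttest_ind(a1b1, a2b1)
t_b2, p_b2 = stats.ttest_ind(a1b2, a2b2)

# Contrast-contrast interaction: the difference of the two simple A effects.
diff = (a1b1.mean() - a2b1.mean()) - (a1b2.mean() - a2b2.mean())
se = np.sqrt(sum(v.var(ddof=1) / n for v in (a1b1, a2b1, a1b2, a2b2)))
t_int = diff / se
p_int = 2 * stats.t.sf(abs(t_int), 4 * (n - 1))
print(t_b1, t_b2, t_int, p_int)
```
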




Re: test the difference between two t-values

Art Kendall-2
I have often found that graphing the results helps convey the basic idea
of interaction.
The DV goes on the vertical axis, the IV on the horizontal axis, and there
would be a line segment for each study.

I would draw the line segment for one study and show how the vertical
distance between the two points is tested by one contrast (a t-test of a
difference).
The same for the second study.

Then, using both ordinal and disordinal interactions, show how the
difference of differences (contrast of contrasts) produces lines that
are non-parallel.
The interaction term tests how consistent the apparent non-parallelism
is with the degree of non-parallelism produced by sampling (random
noise) when the population line segments were parallel.

Since the t-test is a one-way ANOVA with one contrast, why not use it as a
step to introduce ANOVA more broadly, via a minimal two-way ANOVA?
Explain the t-test vs. t-test comparison as an example of the more complex
designs that ANOVA is used for.

Art Kendall
Social Research Consultants





Re: test the difference between two t-values

Richard Ristow
In reply to this post by gilles-15
Is it OK to weigh in with some questions, here? I've been wondering
through this exchange about what you make of the difference between two
t-values; that is, what's the null hypothesis against which you're
testing.

At 08:52 PM 3/26/2007, gilles wrote:

>People often test simple main effects after they get a statistically
>significant interaction effect, hoping or expecting to observe that
>one simple effect t-test will be statistically significant and the
>other simple effect t-test will not be statistically significant (for
>the 2 by 2 case). However, tests of simple main effects do not in fact
>test for an interaction, as it is wholly possible to observe both
>simple main effect t-tests as statistically significant in the
>presence of a statistically significant interaction.

Of course, in at least two plausible cases:

a. Factor 1 affects the outcome, but in different directions depending
on the presence or absence of factor 2.

b. Factor 1 affects the outcome in the same direction regardless of the
presence of factor 2, but the effect is larger in the presence (or in
the absence) of factor 2.

I'm surprised it's taken as an 'ordinary' case that presence of a
factor 1x2 interaction means the factor 1 effect is mainly confined to
one of the two subgroups defined by factor 2.

>The proper analysis, of course, is to perform a 'contrast-contrast
>interaction' (how often do you see one of those in the literature?).

Here I get lost. Why is not the presence of a statistically significant
interaction, itself, the test you're looking for?

>For the purposes of educating my students, I would like to demonstrate
>that while tests of simple main effects are probably not very useful,
>the difference between two t-values derived from a simple main effects
>analysis is effectively equivalent to 'contrast-contrast interaction'
>testing.

And here's where I get lost. How is it that testing difference of the t
statistics, essentially a difference in *measurability* of factor-1
effects between the groups, is more useful than the F-statistic (or
t-statistic) for the interaction, which is a test of a difference of
the factor-1 effects themselves?

-Inquiring,
  Richard

Re: RE : strategy for ordinal x continuous interaction

F. Gabarrot
In reply to this post by Dale Glaser

I didn't find any table on the internet describing the 7 orthogonal polynomials. However, I know a website that allows you to compute orthogonal contrast codes if you already know some of them.

You should try this website: http://www.bolderstats.com/orthogCodes/

Re: RE : strategy for ordinal x continuous interaction

F. Gabarrot
Hello,

I have found a reference in which you will probably find a complete table of orthogonal contrasts:
Kleinbaum, D., Kupper, L., and Muller, K. (1988). Applied Regression Analysis and Other Multivariable Methods. California: Duxbury Press.

However, I couldn't check it myself, so I hope my information is right.

Have a nice day.



Re: RE : strategy for ordinal x continuous interaction

Dale Glaser
Thank you, Fabrice......and yesterday, as another option, I was pondering the strategy of using CATREG (which Art Kendall recommended); since it does not create the multiplicative term the way logistic regression does, then, akin to OLS where one creates the multiplicative term after centering the continuous-level predictors, I was thinking of using the ordinal spline function in CATREG for the ordinal variable and then saving the discretized ordinal variable................following that, I would create the continuous x ordinal interaction term................does that sound reasonable or completely haywire?!!........dale

Dale Glaser, Ph.D.
Principal--Glaser Consulting
Lecturer/Adjunct Faculty--SDSU/USD/AIU
President-Elect, San Diego Chapter of
American Statistical Association
3115 4th Avenue
San Diego, CA 92103
phone: 619-220-0602
fax: 619-220-0412
email: [hidden email]
website: www.glaserconsult.com