Anova SS1 v SS3 (using v 17.0)


Anova SS1 v SS3 (using v 17.0)

Allan Reese (Cefas)
I have a completely balanced dataset and fit the main effects only, using
the default (type 3) sums of squares. It reports 0 df and 0 SS for a factor
with two levels. Switching to type 1 sums of squares gives 1 df and
a sensible sum of squares. Is there a simple reason why sstype(3)
should be so ornery?

On a point of terminology, are SPSS hierarchical sums of squares
identical to SAS sequential ss?

Data:
Row id aliquot assay rep kt lkt (the response)
151. 26 A 1 1 44.550 1.648848
152. 26 A 1 2 39.230 1.593615
153. 26 A 1 3 27.186 1.434345
154. 26 A 2 4 37.453 1.573482
155. 26 A 2 5 49.918 1.698259
156. 26 A 2 6 38.341 1.583661
169. 29 B 1 1 13.450 1.128722
170. 29 B 1 2 6.879 .8375258
171. 29 B 1 3 5.764 .7607239
172. 29 B 2 4 13.116 1.117787
173. 29 B 2 5 11.543 1.062301
174. 29 B 2 6 11.044 1.043127
181. 31 C 1 1 11.400 1.056905
182. 31 C 1 2 6.657 .8233027
183. 31 C 1 3 6.375 .8044802
184. 31 C 2 4 10.415 1.017641
185. 31 C 2 5 11.371 1.055816
186. 31 C 2 6 10.770 1.032208
199. 34 D 1 1 10.161 1.006936
200. 34 D 1 2 5.089 .7066036
201. 34 D 1 3 6.668 .8239956
202. 34 D 2 4 10.955 1.039603
203. 34 D 2 5 12.827 1.108116
204. 34 D 2 6 9.793 .9909264
211. 36 E 1 1 17.000 1.230449
212. 36 E 1 2 10.942 1.039088
213. 36 E 1 3 10.096 1.004149
214. 36 E 2 4 16.018 1.204596
215. 36 E 2 5 13.289 1.123484
216. 36 E 2 6 16.138 1.207857


UNIANOVA lkt BY aliquot method replicate
/METHOD=SSTYPE(1)
/INTERCEPT=INCLUDE
/EMMEANS=TABLES(method)
/CRITERIA=ALPHA(.01)
/DESIGN=method aliquot replicate.

Univariate Analysis of Variance

Notes
|-------------------------------|---------------|
|Output Created |07-Mar-2011 15:|
| |25:24 |
|-------------------------------|---------------|
|Comments | |
|---------------|---------------|---------------|
|Input |Active Dataset |DataSet0 |
| |---------------|---------------|
| |Filter |<none> |
| |---------------|---------------|
| |Weight |<none> |
| |---------------|---------------|
| |Split File |<none> |
| |---------------|---------------|
| |N of Rows in |30 |
| |Working Data | |
| |File | |
|---------------|---------------|---------------|
|Missing Value |Definition of |User-defined |
|Handling |Missing |missing values |
| | |are treated as |
| | |missing. |
| |---------------|---------------|
| |Cases Used |Statistics are |
| | |based on all |
| | |cases with |
| | |valid data for |
| | |all variables |
| | |in the model. |
|---------------|---------------|---------------|
|Syntax |UNIANOVA lkt |
| |BY aliquot |
| |method |
| |replicate |
| |/METHOD=SSTYPE |
| |(1) |
| |/INTERCEPT=INCL|
| |UDE |
| |/EMMEANS=TABLES|
| |(method) |
| | |
| |/CRITERIA=ALPHA|
| |(.01) |
| | |
| |/DESIGN=method |
|---------------|---------------|---------------|
|Resources |Processor Time |0:00:00.016 |
| |---------------|---------------|
| |Elapsed Time |0:00:00.031 |
|---------------|---------------|---------------|



[DataSet0]


Between-Subjects Factors
|--------------|--|
| |N |
|---------|----|--|
|aliquot |A |6 |
| |----|--|
| |B |6 |
| |----|--|
| |C |6 |
| |----|--|
| |D |6 |
| |----|--|
| |E |6 |
|---------|----|--|
|method |1.00|15|
| |----|--|
| |2.00|15|
|---------|----|--|
|replicate|1.00|5 |
| |----|--|
| |2.00|5 |
| |----|--|
| |3.00|5 |
| |----|--|
| |4.00|5 |
| |----|--|
| |5.00|5 |
| |----|--|
| |6.00|5 |
|---------|----|--|



Tests of Between-Subjects Effects
Dependent Variable:lkt
|---------------|---------------|--|-----------|---------|----|
|Source |Type I Sum of |df|Mean Square|F |Sig.|
| |Squares | | | | |
|---------------|---------------|--|-----------|---------|----|
|Corrected Model|2.056a |9 |.228 |61.462 |.000|
|---------------|---------------|--|-----------|---------|----|
|Intercept |37.988 |1 |37.988 |10219.052|.000|
|---------------|---------------|--|-----------|---------|----|
|method |.128 |1 |.128 |34.418 |.000|
|---------------|---------------|--|-----------|---------|----|
|aliquot |1.743 |4 |.436 |117.219 |.000|
|---------------|---------------|--|-----------|---------|----|
|replicate |.185 |4 |.046 |12.466 |.000|
|---------------|---------------|--|-----------|---------|----|
|Error |.074 |20|.004 | | |
|---------------|---------------|--|-----------|---------|----|
|Total |40.119 |30| | | |
|---------------|---------------|--|-----------|---------|----|
|Corrected Total|2.131 |29| | | |
|---------------|---------------|--|-----------|---------|----|
a. R Squared = .965 (Adjusted R Squared = .949)

[method+aliquot+replicate ss sum to model ss]


Estimated Marginal Means

method
Dependent Variable:lkt
|------|----|----------|---------------------------|
|method|Mean|Std. Error|99% Confidence Interval |
| | | | |
| | | |---------------|-----------|
| | | |Lower Bound |Upper Bound|
|------|----|----------|---------------|-----------|
|1.00 |.a |. |. |. |
|------|----|----------|---------------|-----------|
|2.00 |.a |. |. |. |
|------|----|----------|---------------|-----------|
a. This modified population marginal mean is not estimable.



UNIANOVA lkt BY aliquot method replicate
/METHOD=SSTYPE(3)
/INTERCEPT=INCLUDE
/EMMEANS=TABLES(method)
/CRITERIA=ALPHA(.01)
/DESIGN=method aliquot replicate.




Tests of Between-Subjects Effects
Dependent Variable:lkt
|---------------|---------------|--|-----------|---------|----|
|Source |Type III Sum |df|Mean Square|F |Sig.|
| |of Squares | | | | |
|---------------|---------------|--|-----------|---------|----|
|Corrected Model|2.056a |9 |.228 |61.462 |.000|
|---------------|---------------|--|-----------|---------|----|
|Intercept |37.988 |1 |37.988 |10219.052|.000|
|---------------|---------------|--|-----------|---------|----|
|method |.000 |0 |. |. |. |
|---------------|---------------|--|-----------|---------|----|
|aliquot |1.743 |4 |.436 |117.219 |.000|
|---------------|---------------|--|-----------|---------|----|
|replicate |.185 |4 |.046 |12.466 |.000|
|---------------|---------------|--|-----------|---------|----|
|Error |.074 |20|.004 | | |
|---------------|---------------|--|-----------|---------|----|
|Total |40.119 |30| | | |
|---------------|---------------|--|-----------|---------|----|
|Corrected Total|2.131 |29| | | |
|---------------|---------------|--|-----------|---------|----|
a. R Squared = .965 (Adjusted R Squared = .949)



Estimated Marginal Means

method
Dependent Variable:lkt
|------|----|----------|---------------------------|
|method|Mean|Std. Error|99% Confidence Interval |
| | | | |
| | | |---------------|-----------|
| | | |Lower Bound |Upper Bound|
|------|----|----------|---------------|-----------|
|1.00 |.a |. |. |. |
|------|----|----------|---------------|-----------|
|2.00 |.a |. |. |. |
|------|----|----------|---------------|-----------|
a. This modified population marginal mean is not estimable.

[model ss does not equal sum of parts]


Thanks for any comments
Allan





R Allan Reese
Senior statistician, Cefas
The Nothe, Weymouth DT4 8UB

Tel: +44 (0)1305 206614 -direct
Fax: +44 (0)1305 206601

www.cefas.co.uk


Re: Anova SS1 v SS3 (using v 17.0)

Bruce Weaver
Administrator
Your UNIANOVA syntax lists "method" as one of the factors, but "method" is not included in the description of the data.  Are "assay" and "method" referring to the same variable?

Also, UNIANOVA treats the 6 replicates per ID as if they are independent observations.  But if they are repeated measures on the same unit (ID), then they will not be completely independent of each other, and should be treated accordingly.  One way to do that is via a mixed design (between-within, or split-plot) ANOVA (Analyze - GLM - Repeated Measures).  For this approach, you would have to restructure the data to have one row per ID, with the 6 replicates appearing in 6 columns.  (Look up CASESTOVARS to see how to restructure the data.)
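
A minimal sketch of that restructuring, using the variable names shown in the posted data (id, assay, rep); untested here:

CASESTOVARS
  /ID=id
  /INDEX=assay rep.

This yields one row per id, with the six measurements spread into separate columns (e.g., lkt.1.1 through lkt.2.6) that can then serve as the within-subjects variables.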

HTH.


--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).
Re: Anova SS1 v SS3 (using v 17.0)

Mike
In reply to this post by Allan Reese (Cefas)
My first reaction is that you do not have a full factorial design; that
is, not all levels of your independent variables appear with all levels
of the other independent variables.  This becomes clear if you use
the Means procedure to examine the structure of the data, as in the
following (note: method=assay):
 
means table=lkt by method by aliquot by rep.
 
It seems that a better way of thinking of the design is that replicates
are nested within method, which in turn is crossed with aliquot
(replicates 1-3 appear only with method=1 and replicates 4-6 only with
method=2, while all levels of aliquot appear with all levels of method).
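
If that is the structure, one way (a sketch only, not tested against these data) to declare the nesting explicitly in UNIANOVA is via the WITHIN keyword on the DESIGN subcommand:

UNIANOVA lkt BY aliquot method replicate
  /METHOD=SSTYPE(3)
  /INTERCEPT=INCLUDE
  /DESIGN=method aliquot replicate WITHIN method.

With replicate declared as nested in method, Type III no longer adjusts method for a crossed replicate effect that completely contains it, so method should recover its 1 df.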
 
It's been a while since I was concerned with the different methods
of calculating the SS, but having an incomplete factorial design
is probably what is throwing off the Type 3 calculations: because
replicates 1-3 occur only with method=1 and replicates 4-6 only with
method=2, the method contrast lies entirely within the replicate effect,
so once method is adjusted for replicate (as Type 3 does) there is
nothing left for it to explain. (Note that replicate likewise shows only
4 df rather than 5.) Type 1 might be misleading as well (just because it
produces results doesn't mean that they are correct).
 
I'll let someone with greater or more recent familiarity with these issues
take it from here.
 
-Mike Palij
New York University
 
 
Re: Anova SS1 v SS3 (using v 17.0)

Bruce Weaver
Administrator
In reply to this post by Bruce Weaver
I've now had time to take another look at this.  It seems to me that both Method (aka "assay") and Replicate are within-Ss (or repeated measures) factors, with 2 and 3 levels respectively.  It is also clear that ALIQUOT is completely confounded with ID.  If all of this is correct, then you have a 2x3 repeated measures ANOVA, with repeated measures on both factors.  To run that model in the usual fashion, you have to restructure the data from LONG to WIDE.


data list list / Row id (2f5.0) aliquot(a1) assay rep (2f2.0) kt lkt (2f8.2).
begin data
151. 26 A 1 1 44.550 1.648848
152. 26 A 1 2 39.230 1.593615
153. 26 A 1 3 27.186 1.434345
154. 26 A 2 4 37.453 1.573482
155. 26 A 2 5 49.918 1.698259
156. 26 A 2 6 38.341 1.583661
169. 29 B 1 1 13.450 1.128722
170. 29 B 1 2 6.879 .8375258
171. 29 B 1 3 5.764 .7607239
172. 29 B 2 4 13.116 1.117787
173. 29 B 2 5 11.543 1.062301
174. 29 B 2 6 11.044 1.043127
181. 31 C 1 1 11.400 1.056905
182. 31 C 1 2 6.657 .8233027
183. 31 C 1 3 6.375 .8044802
184. 31 C 2 4 10.415 1.017641
185. 31 C 2 5 11.371 1.055816
186. 31 C 2 6 10.770 1.032208
199. 34 D 1 1 10.161 1.006936
200. 34 D 1 2 5.089 .7066036
201. 34 D 1 3 6.668 .8239956
202. 34 D 2 4 10.955 1.039603
203. 34 D 2 5 12.827 1.108116
204. 34 D 2 6 9.793 .9909264
211. 36 E 1 1 17.000 1.230449
212. 36 E 1 2 10.942 1.039088
213. 36 E 1 3 10.096 1.004149
214. 36 E 2 4 16.018 1.204596
215. 36 E 2 5 13.289 1.123484
216. 36 E 2 6 16.138 1.207857
end data.

* GLM Repeated Measures needs WIDE file format, so restructure.

CASESTOVARS
  /ID=id
  /INDEX=assay rep
  /DROP = row
  /GROUPBY=VARIABLE.

* Try model with ALIQUOT as a Between-Ss factor.

GLM lkt.1.1 lkt.1.2 lkt.1.3 lkt.2.4 lkt.2.5 lkt.2.6 BY aliquot
  /WSFACTOR=method 2 Polynomial rep 3 Polynomial
  /METHOD=SSTYPE(3)
  /EMMEANS=TABLES(aliquot)
  /EMMEANS=TABLES(method)
  /EMMEANS=TABLES(rep)
  /EMMEANS=TABLES(aliquot*method)
  /EMMEANS=TABLES(aliquot*rep)
  /EMMEANS=TABLES(method*rep)
  /EMMEANS=TABLES(aliquot*method*rep)
  /CRITERIA=ALPHA(.05)
  /WSDESIGN=method rep method*rep
  /DESIGN=aliquot.

* That model does not work, because ALIQUOT is completely
* confounded with ID -- i.e., there is only one ID for
* each level of ALIQUOT.

* Run the model without the Between-Ss factor ALIQUOT.

GLM lkt.1.1 lkt.1.2 lkt.1.3 lkt.2.4 lkt.2.5 lkt.2.6
  /WSFACTOR=method 2 Polynomial rep 3 Polynomial
  /METHOD=SSTYPE(3)
  /PLOT=PROFILE(rep*method)
  /EMMEANS=TABLES(method)
  /EMMEANS=TABLES(rep)
  /EMMEANS=TABLES(method*rep) compare(rep) adj(bonferroni)
  /CRITERIA=ALPHA(.05)
  /WSDESIGN=method rep method*rep.


HTH.


Re: Anova SS1 v SS3 (using v 17.0)

Mike
It looks like Bruce has gotten the right design and analysis,
though one basic question that needs to be asked is: what
is the unit of analysis?  Is it a single person with repeated
measures (are there only 5 persons' worth of data?) or
some other unit?
 
In any event, one would not want to treat these data as a
between-subjects design or assume that one "row" is independent
of any other row.  Using Bruce's wide data, the correlations
between the "repeated measures" are almost absurdly high. One
can look at the pairwise correlations with the following:
 
corr var=lkt.1.1 lkt.1.2 lkt.1.3 lkt.2.4 lkt.2.5 lkt.2.6/
  stat=desc.
 
or with the reliability procedure which provides a summary
of the pairwise correlations:
 
reliab var=lkt.1.1 lkt.1.2 lkt.1.3 lkt.2.4 lkt.2.5 lkt.2.6/
  scale(LKT)=lkt.1.1 lkt.1.2 lkt.1.3 lkt.2.4 lkt.2.5 lkt.2.6/
  summary=corr total means.
 
From the output of the reliability procedure:

Summary Item Statistics
|-----------------------|-----|-------|-------|-----|-----------------|--------|----------|
|                       |Mean |Minimum|Maximum|Range|Maximum / Minimum|Variance|N of Items|
|-----------------------|-----|-------|-------|-----|-----------------|--------|----------|
|Item Means             |1.125|.966   |1.214  |.249 |1.258            |.013    |6         |
|-----------------------|-----|-------|-------|-----|-----------------|--------|----------|
|Inter-Item Correlations|.976 |.948   |.998   |.049 |1.052            |.000    |6         |
|-----------------------|-----|-------|-------|-----|-----------------|--------|----------|

 
 
A mean correlation of 0.976? Range .948-.998? 
That's just off the hook.  It also explains why the repeated
measures ANOVA provides so many significant results for
N=5.
 
-Mike Palij
New York University
 
 
----- Original Message -----
From: "Bruce Weaver" <[hidden email]>
Sent: Monday, March 07, 2011 4:38 PM
Subject: Re: Anova SS1 vSS3 (using v 17.0)

> I've now had time to take another look at this.  It seems to me that both

> Method (aka "assay") and Replicate are within-Ss (or repeated measures)
> factors, with 2 and 3 levels respectively.  It is also clear that ALIQUOT is
> completely confounded with ID.  If all of this is correct, then have a 2x3
> repeated measures ANOVA, with repeated measures on both factors.  To run
> that model in the usual fashion, you have to restructure the data from LONG
> to WIDE.
>
>
> data list list / Row id (2f5.0) aliquot(a1) assay rep (2f2.0) kt lkt
> (2f8.2).
> begin data
> 151. 26 A 1 1 44.550 1.648848
> 152. 26 A 1 2 39.230 1.593615
> 153. 26 A 1 3 27.186 1.434345
> 154. 26 A 2 4 37.453 1.573482
> 155. 26 A 2 5 49.918 1.698259
> 156. 26 A 2 6 38.341 1.583661
> 169. 29 B 1 1 13.450 1.128722
> 170. 29 B 1 2 6.879 .8375258
> 171. 29 B 1 3 5.764 .7607239
> 172. 29 B 2 4 13.116 1.117787
> 173. 29 B 2 5 11.543 1.062301
> 174. 29 B 2 6 11.044 1.043127
> 181. 31 C 1 1 11.400 1.056905
> 182. 31 C 1 2 6.657 .8233027
> 183. 31 C 1 3 6.375 .8044802
> 184. 31 C 2 4 10.415 1.017641
> 185. 31 C 2 5 11.371 1.055816
> 186. 31 C 2 6 10.770 1.032208
> 199. 34 D 1 1 10.161 1.006936
> 200. 34 D 1 2 5.089 .7066036
> 201. 34 D 1 3 6.668 .8239956
> 202. 34 D 2 4 10.955 1.039603
> 203. 34 D 2 5 12.827 1.108116
> 204. 34 D 2 6 9.793 .9909264
> 211. 36 E 1 1 17.000 1.230449
> 212. 36 E 1 2 10.942 1.039088
> 213. 36 E 1 3 10.096 1.004149
> 214. 36 E 2 4 16.018 1.204596
> 215. 36 E 2 5 13.289 1.123484
> 216. 36 E 2 6 16.138 1.207857
> end data.
>
> * GLM Repeated Measures needs WIDE file format, so restructure.
>
> CASESTOVARS
>  /ID=id
>  /INDEX=assay rep
>  /DROP = row
>  /GROUPBY=VARIABLE.
>
> * Try model with ALIQUOT as a Between-Ss factor.
>
> GLM lkt.1.1 lkt.1.2 lkt.1.3 lkt.2.4 lkt.2.5 lkt.2.6 BY aliquot
>  /WSFACTOR=method 2 Polynomial rep 3 Polynomial
>  /METHOD=SSTYPE(3)
>  /EMMEANS=TABLES(aliquot)
>  /EMMEANS=TABLES(method)
>  /EMMEANS=TABLES(rep)
>  /EMMEANS=TABLES(aliquot*method)
>  /EMMEANS=TABLES(aliquot*rep)
>  /EMMEANS=TABLES(method*rep)
>  /EMMEANS=TABLES(aliquot*method*rep)
>  /CRITERIA=ALPHA(.05)
>  /WSDESIGN=method rep method*rep
>  /DESIGN=aliquot.
>
> * That model does not work, because ALIQUOT is completely
> * confounded with ID -- i.e., there is only one ID for
> * each level of ALIQUOT.
>
> * Run the model without the Between-Ss factor ALIQUOT.
>
> GLM lkt.1.1 lkt.1.2 lkt.1.3 lkt.2.4 lkt.2.5 lkt.2.6
>  /WSFACTOR=method 2 Polynomial rep 3 Polynomial
>  /METHOD=SSTYPE(3)
>  /PLOT=PROFILE(rep*method)
>  /EMMEANS=TABLES(method)
>  /EMMEANS=TABLES(rep)
>  /EMMEANS=TABLES(method*rep) compare(rep) adj(bonferroni)
>  /CRITERIA=ALPHA(.05)
>  /WSDESIGN=method rep method*rep.
>
>
> HTH.
>
>
>
> Bruce Weaver wrote:
>>
>> Your UNIANOVA syntax lists "method" as one of the factors, but "method" is
>> not included in the description of the data.  Are "assay" and "method"
>> referring to the same variable?
>>
>> Also, UNIANOVA treats the 6 replicates per ID as if they are independent
>> observations.  But if they are repeated measures on the same unit (ID),
>> then they will not be completely independent of each other, and should be
>> treated accordingly.  One way to do that is via a mixed design
>> (between-within, or split-plot) ANOVA (Analyze - GLM - Repeated Measures).
>> For this approach, you would have to restructure the data to have one row
>> per ID, with the 6 replicates appearing in 6 columns.  (Look up
>> CASESTOVARS to see how to restructure the data.)
>>
>> HTH.
>>
>>
>>
>> Allan Reese (Cefas) wrote:
>>>
>>> I have a completely balanced dataset and fit the main effects only using
>>> default (type 3) sums of squares.  It reports 0df and 0ss for a factor
>>> with two levels.  Switching to type 1 sums of squares, it gives 1df and
>>> a sensible sum of squares.  Is there a simple reason why sstype(3)
>>> should be so ornery?
>>>
>>> On a point of terminology, are SPSS hierarchical sums of squares
>>> identical to SAS sequential ss?
>>>
>>> Data:
>>> Row   id  aliquot assay rep     kt          lkt (the response)
>>> 151. 26      A        1      1        44.550  1.648848
>>> 152. 26      A        1      2        39.230  1.593615
>>> 153. 26      A        1      3        27.186  1.434345
>>> 154. 26      A        2      4        37.453  1.573482
>>> 155. 26      A        2      5        49.918  1.698259
>>> 156. 26      A        2      6        38.341  1.583661
>>> 169. 29      B        1      1        13.450  1.128722
>>> 170. 29      B        1      2       6.879    .8375258
>>> 171. 29      B        1      3       5.764    .7607239
>>> 172. 29      B        2      4        13.116  1.117787
>>> 173. 29      B        2      5        11.543  1.062301
>>> 174. 29      B        2      6        11.044  1.043127
>>> 181. 31      C        1      1        11.400  1.056905
>>> 182. 31      C        1      2       6.657    .8233027
>>> 183. 31      C        1      3       6.375    .8044802
>>> 184. 31      C        2      4        10.415  1.017641
>>> 185. 31      C        2      5        11.371  1.055816
>>> 186. 31      C        2      6        10.770  1.032208
>>> 199. 34      D        1      1        10.161  1.006936
>>> 200. 34      D        1      2       5.089    .7066036
>>> 201. 34      D        1      3       6.668    .8239956
>>> 202. 34      D        2      4        10.955  1.039603
>>> 203. 34      D        2      5        12.827  1.108116
>>> 204. 34      D        2      6       9.793    .9909264
>>> 211. 36      E        1      1        17.000  1.230449
>>> 212. 36      E        1      2        10.942  1.039088
>>> 213. 36      E        1      3        10.096  1.004149
>>> 214. 36      E        2      4        16.018  1.204596
>>> 215. 36      E        2      5        13.289  1.123484
>>> 216. 36      E        2      6        16.138  1.207857
>>>
>>>
>>> UNIANOVA lkt BY aliquot method replicate
>>>   /METHOD=SSTYPE(1)
>>>   /INTERCEPT=INCLUDE
>>>    /EMMEANS=TABLES(method)
>>>   /CRITERIA=ALPHA(.01)
>>>   /DESIGN=method aliquot replicate.
>>>
>>> Univariate Analysis of Variance
>>>
>>> Notes
>>> |-------------------------------|---------------|
>>> |Output Created                 |07-Mar-2011 15:|
>>> |                               |25:24          |
>>> |-------------------------------|---------------|
>>> |Comments                       |               |
>>> |---------------|---------------|---------------|
>>> |Input          |Active Dataset |DataSet0       |
>>> |               |---------------|---------------|
>>> |               |Filter         |         |
>>> |               |---------------|---------------|
>>> |               |Weight         |         |
>>> |               |---------------|---------------|
>>> |               |Split File     |         |
>>> |               |---------------|---------------|
>>> |               |N of Rows in   |30             |
>>> |               |Working Data   |               |
>>> |               |File           |               |
>>> |---------------|---------------|---------------|
>>> |Missing Value  |Definition of  |User-defined   |
>>> |Handling       |Missing        |missing values |
>>> |               |               |are treated as |
>>> |               |               |missing.       |
>>> |               |---------------|---------------|
>>> |               |Cases Used     |Statistics are |
>>> |               |               |based on all   |
>>> |               |               |cases with     |
>>> |               |               |valid data for |
>>> |               |               |all variables  |
>>> |               |               |in the model.  |
>>> |---------------|---------------|---------------|
>>> |Syntax                         |UNIANOVA lkt   |
>>> |                               |BY aliquot     |
>>> |                               |method         |
>>> |                               |replicate      |
>>> |                               |/METHOD=SSTYPE |
>>> |                               |(1)            |
>>> |                               |/INTERCEPT=INCL|
>>> |                               |UDE            |
>>> |                               |/EMMEANS=TABLES|
>>> |                               |(method)       |
>>> |                               |               |
>>> |                               |/CRITERIA=ALPHA|
>>> |                               |(.01)          |
>>> |                               |               |
>>> |                               |/DESIGN=method |
>>> |---------------|---------------|---------------|
>>> |Resources      |Processor Time |0:00:00.016    |
>>> |               |---------------|---------------|
>>> |               |Elapsed Time   |0:00:00.031    |
>>> |---------------|---------------|---------------|
>>>
>>>
>>>
>>> [DataSet0]
>>>
>>>
>>> Between-Subjects Factors
>>> |--------------|--|
>>> |              |N |
>>> |---------|----|--|
>>> |aliquot  |A   |6 |
>>> |         |----|--|
>>> |         |B   |6 |
>>> |         |----|--|
>>> |         |C   |6 |
>>> |         |----|--|
>>> |         |D   |6 |
>>> |         |----|--|
>>> |         |E   |6 |
>>> |---------|----|--|
>>> |method   |1.00|15|
>>> |         |----|--|
>>> |         |2.00|15|
>>> |---------|----|--|
>>> |replicate|1.00|5 |
>>> |         |----|--|
>>> |         |2.00|5 |
>>> |         |----|--|
>>> |         |3.00|5 |
>>> |         |----|--|
>>> |         |4.00|5 |
>>> |         |----|--|
>>> |         |5.00|5 |
>>> |         |----|--|
>>> |         |6.00|5 |
>>> |---------|----|--|
>>>
>>>
>>>
>>> Tests of Between-Subjects Effects
>>> Dependent Variable:lkt
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Source         |Type I Sum of  |df|Mean Square|F        |Sig.|
>>> |               |Squares        |  |           |         |    |
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Corrected Model|2.056a         |9 |.228       |61.462   |.000|
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Intercept      |37.988         |1 |37.988     |10219.052|.000|
>>> |---------------|---------------|--|-----------|---------|----|
>>> |method         |.128           |1 |.128       |34.418   |.000|
>>> |---------------|---------------|--|-----------|---------|----|
>>> |aliquot        |1.743          |4 |.436       |117.219  |.000|
>>> |---------------|---------------|--|-----------|---------|----|
>>> |replicate      |.185           |4 |.046       |12.466   |.000|
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Error          |.074           |20|.004       |         |    |
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Total          |40.119         |30|           |         |    |
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Corrected Total|2.131          |29|           |         |    |
>>> |---------------|---------------|--|-----------|---------|----|
>>> a. R Squared = .965 (Adjusted R Squared = .949)
>>>
>>> [method+aliquot+replicate ss sum to model ss]
>>>
>>>
>>> Estimated Marginal Means
>>>
>>> method
>>> Dependent Variable:lkt
>>> |------|----|----------|---------------------------|
>>> |method|Mean|Std. Error|99% Confidence Interval    |
>>> |      |    |          |                           |
>>> |      |    |          |---------------|-----------|
>>> |      |    |          |Lower Bound    |Upper Bound|
>>> |------|----|----------|---------------|-----------|
>>> |1.00  |.a  |.         |.              |.          |
>>> |------|----|----------|---------------|-----------|
>>> |2.00  |.a  |.         |.              |.          |
>>> |------|----|----------|---------------|-----------|
>>> a. This modified population marginal mean is not estimable.
>>>
>>>
>>>
>>> UNIANOVA lkt BY aliquot method replicate
>>>   /METHOD=SSTYPE(3)
>>>   /INTERCEPT=INCLUDE
>>>    /EMMEANS=TABLES(method)
>>>   /CRITERIA=ALPHA(.01)
>>>   /DESIGN=method aliquot replicate.
>>>
>>>
>>>
>>>
>>> Tests of Between-Subjects Effects
>>> Dependent Variable:lkt
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Source         |Type III Sum   |df|Mean Square|F        |Sig.|
>>> |               |of Squares     |  |           |         |    |
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Corrected Model|2.056a         |9 |.228       |61.462   |.000|
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Intercept      |37.988         |1 |37.988     |10219.052|.000|
>>> |---------------|---------------|--|-----------|---------|----|
>>> |method         |.000           |0 |.          |.        |.   |
>>> |---------------|---------------|--|-----------|---------|----|
>>> |aliquot        |1.743          |4 |.436       |117.219  |.000|
>>> |---------------|---------------|--|-----------|---------|----|
>>> |replicate      |.185           |4 |.046       |12.466   |.000|
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Error          |.074           |20|.004       |         |    |
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Total          |40.119         |30|           |         |    |
>>> |---------------|---------------|--|-----------|---------|----|
>>> |Corrected Total|2.131          |29|           |         |    |
>>> |---------------|---------------|--|-----------|---------|----|
>>> a. R Squared = .965 (Adjusted R Squared = .949)
>>>
>>>
>>>
>>> Estimated Marginal Means
>>>
>>> method
>>> Dependent Variable:lkt
>>> |------|----|----------|---------------------------|
>>> |method|Mean|Std. Error|99% Confidence Interval    |
>>> |      |    |          |                           |
>>> |      |    |          |---------------|-----------|
>>> |      |    |          |Lower Bound    |Upper Bound|
>>> |------|----|----------|---------------|-----------|
>>> |1.00  |.a  |.         |.              |.          |
>>> |------|----|----------|---------------|-----------|
>>> |2.00  |.a  |.         |.              |.          |
>>> |------|----|----------|---------------|-----------|
>>> a. This modified population marginal mean is not estimable.
>>>
>>> [model ss does not equal sum of parts]
>>>
>>>
>>> Thanks for any comments
>>> Allan
>>>
>>>
>>>
>>>
>>>
>>> R Allan Reese
>>> Senior statistician, Cefas
>>> The Nothe, Weymouth DT4 8UB
>>>
>>> Tel: +44 (0)1305 206614 -direct
>>> Fax: +44 (0)1305 206601
>>>
>>>
>>> www.cefas.co.uk
Reply | Threaded
Open this post in threaded view
|

T-test or nonparametric test? Confused.

Bridgette Portman
In reply to this post by Bruce Weaver
Hi everyone,

This is a rather elementary statistics question and I feel kind of stupid
asking it. But I've managed to thoroughly confuse myself. I hope someone
can help me out.

I've collected survey data from 260 respondents. As I'm analyzing
demographic information, I have noticed that the distribution of ages in
my sample is not normal. In fact, it is bimodal, with peaks around 20 and
60, and a trough around 40. This was due to my sampling method, not to any
intrinsic pattern in the population I was sampling from. I want to be able
to compare ages between various groups, such as men and women, in my
sample. But can I use a t-test, given the non-normal distribution? Should I
use a nonparametric test like Mann-Whitney U instead?

The reason I'm confused is that the bimodality is in my sample alone, due
to my sampling technique. The ages in the population as a whole, I'm sure,
have an underlying normal distribution. I am studying political activists,
and in order to get at them, I sampled from a) college student political
clubs, and b) actual political parties. The college kids tended to be
around 20, while the party people were 50+. I know one alternative would
be to just recode age into something like "below 40" and "above 40" but
I'd rather avoid doing that if I can.
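
For concreteness, a minimal sketch of the two candidate commands, using placeholder variable names (AGE for age, SEX coded 1/2 for the two groups being compared):

* Independent-samples t-test (placeholder names: AGE, and SEX coded 1/2).
T-TEST GROUPS=sex(1 2)
  /VARIABLES=age
  /CRITERIA=CI(.95).

* Mann-Whitney U alternative on the same variables.
NPAR TESTS
  /M-W=age BY sex(1 2).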

Can anyone offer advice?

Thanks,
Bridgette

Reply | Threaded
Open this post in threaded view
|

Re: T-test or nonparametric test? Confused.

Bruce Weaver
Administrator
Several points here.

1. All t-tests have the form t = (statistic - parameter|H0) / SE(statistic).  The test will be pretty good if the sampling distribution of the statistic in the numerator is approximately normal.  As the sample size increases, the sampling distribution of the statistic converges on a normal distribution (almost regardless of population shape).

2. I said "approximately" in point 1, because as George Box said, normal distributions (and straight lines) do not exist in nature.  

3. The t-test would be an exact test only if the two populations were perfectly normal, and the two population variances exactly equal.  Since neither of those conditions will ever hold, the t-test on real data is an approximate test.  Therefore, the real question becomes whether the approximation is good enough to be "useful".  (I refer to another George Box quote here:  "All models are wrong. Some are useful.")

4. Normality (which never holds) applies to the POPULATIONS from which you sampled.

5. If you are going to assess normality via plots, you can't do it by looking at one plot--you need two plots, one for each group.  (Suppose the two populations were perfectly normal, but with different means.  In this case, you would most likely see a bimodal distribution if you made one plot.)  A sketch of how to get per-group plots follows after the list below.

6. The t-test is quite robust to non-normality of populations.  If your library has it, take a look at Figure 12.2 in Statistical Methods in Education and Psychology, 3rd Edition (by Glass & Hopkins).  It shows that the t-test performs quite well under the following circumstances:

R/R – both populations rectangular; n1 = n2 = 5
S/S – both populations skewed; n1 = n2 = 15
N/S – one population normal, one skewed; n1 = n2 = 15
R/S – one population rectangular, one skewed; n1 = n2 = 15
L/L – both populations leptokurtic (i.e., tails thicker than the normal distribution); n1 = 5, n2 = 15
ES/ES – both populations extremely skewed in same direction; n1 = 5, n2 = 15
M/M – both populations multimodal; n1 = 5, n2 = 15
SP/SP – both populations spiked; n1 = 5, n2 = 15
T/T – both populations triangular; n1 = 5, n2 = 15

And for dichotomous populations with:
 P = .5, Q = .5, n = 11
 P = .6, Q = .4, n = 11
 P = .75, Q = .25, n = 11
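
Re point 5, a minimal sketch of one way to get per-group plots in SPSS, using EXAMINE with a BY factor (AGE and SEX are placeholder variable names):

* One histogram and normal Q-Q plot per group (placeholder names AGE, SEX).
EXAMINE VARIABLES=age BY sex
  /PLOT=HISTOGRAM NPPLOT
  /STATISTICS=DESCRIPTIVES.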

HTH.

--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).
Reply | Threaded
Open this post in threaded view
|

chi-square post-hoc tests

Bridgette Portman
I have another question.

I'm confused about how to perform post-hoc tests for chi-square
contingency tables larger than 2 x 2. I've been reading up on it in books
and on the internet, and there seem to be two different methods advised.
Some say to do multiple pairwise comparisons (2x2 tables) with a
Bonferroni correction. Others say to look at the standardized residuals.
I'm not sure which is the better way. Is there any easy way to perform
posthoc tests on contingency tables in SPSS?
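
For the residuals route, a minimal sketch of how they can be requested directly from CROSSTABS (ROWVAR and COLVAR are placeholder names):

* Chi-square plus standardized and adjusted standardized residuals per cell.
CROSSTABS
  /TABLES=rowvar BY colvar
  /STATISTICS=CHISQ
  /CELLS=COUNT EXPECTED SRESID ASRESID.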

Thanks,
Bridgette

Reply | Threaded
Open this post in threaded view
|

Re: T-test or nonparametric test? Confused.

John F Hall
In reply to this post by Bridgette Portman
Age is never normally distributed.  It looks like you need to split your data
file by sample source, or create a new variable (if you haven't already done
so), e.g. SAMPLE with two categories 1 = 'Colleges' and 2 = 'Parties'.  After
that it's not quite clear what you're trying to do, but crosstabs can also
help if you group ages.
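
A minimal sketch of that, assuming SAMPLE is (or has been made) a numeric variable coded 1/2, and with AGE and SEX as placeholder names:

* Label the source variable and run subsequent output separately per source.
VALUE LABELS sample 1 'Colleges' 2 'Parties'.
SORT CASES BY sample.
SPLIT FILE LAYERED BY sample.

* Or band age and crosstabulate it, e.g. against sex.
RECODE age (LO THRU 39=1) (40 THRU HI=2) INTO agegrp.
VALUE LABELS agegrp 1 'Under 40' 2 '40 and over'.
CROSSTABS /TABLES=agegrp BY sex /CELLS=COUNT COLUMN.
SPLIT FILE OFF.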


John Hall
[hidden email]
www.surveyresearch.weebly.com





Reply | Threaded
Open this post in threaded view
|

Re: chi-square post-hoc tests

Burleson,Joseph A.
In reply to this post by Bridgette Portman
If one of the variables has 2 levels (e.g., a 2 x 3 table), use logistic regression with the 2-level variable as the outcome. Then use appropriate a priori contrasts to disentangle the df (2 df in the case of the 3-level variable).

If neither variable has 2 levels, then you need to consider multinomial logistic regression.
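
A minimal sketch of both, with placeholder names (OUTCOME is the 2-level variable, GROUP the 3-level factor, and OUTCOME3 an outcome with 3 or more levels):

* 2 x 3 case: binary outcome, 3-level factor as a categorical predictor
* with indicator contrasts (2 df).
LOGISTIC REGRESSION VARIABLES outcome
  /METHOD=ENTER group
  /CONTRAST(group)=INDICATOR
  /PRINT=CI(95).

* No 2-level variable: multinomial logistic regression.
NOMREG outcome3 BY group
  /PRINT=PARAMETER SUMMARY LRT.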

Joe Burleson

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Bridgette Portman
Sent: Wednesday, March 09, 2011 1:00 AM
To: [hidden email]
Subject: chi-square post-hoc tests

I have another question.

I'm confused about how to perform post-hoc tests for chi-square
contingency tables larger than 2 x 2. I've been reading up on it in books
and on the internet, and there seem to be two different methods advised.
Some say to do multiple pairwise comparisons (2x2 tables) with a
Bonferroni correction. Others say to look at the standardized residuals.
I'm not sure which is the better way. Is there any easy way to perform
posthoc tests on contingency tables in SPSS?

Thanks,
Bridgette

Reply | Threaded
Open this post in threaded view
|

Re: chi-square post-hoc tests

Bridgette Portman
That seems like so much extra work. What about the "compare column
proportions" option under "z-tests" in Crosstabs --> Cells? Is anyone
familiar with using this? If I am interpreting it right, it allows for the
kind of pairwise comparisons I'm trying to do, with the option for a
Bonferroni adjustment to the alpha level.
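
Pasting from that dialog should produce syntax along these lines; the BPROP cell keyword shown here (column-proportion z-tests with Bonferroni adjustment) is from memory, so check it against your own pasted syntax, and ROWVAR/COLVAR are placeholders:

* Column-proportions z-tests with Bonferroni-adjusted p-values.
* BPROP is assumed to be the v19 keyword for this option; verify by
* pasting from Crosstabs > Cells.
CROSSTABS
  /TABLES=rowvar BY colvar
  /STATISTICS=CHISQ
  /CELLS=COUNT COLUMN BPROP.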

Bridgette


Reply | Threaded
Open this post in threaded view
|

Automatic reply: chi-square post-hoc tests

Tom
Thank you very much for your email. I am away and will be back in the office on Friday, 11 March. I will get back to you as soon as possible after that. Kind regards, Thomas Balmer

Reply | Threaded
Open this post in threaded view
|

Re: chi-square post-hoc tests

Burleson,Joseph A.
In reply to this post by Bridgette Portman
True, it is a bit of extra work. But the Bonferroni adjustments are extremely strong, exacting a large penalty. Also, by picking apart the 3 x 2 into its 3 separate 2 x 2 components, you are only using 2/3 of the data each time, and at most 2 of the 3 tests are orthogonal.

So it is fairly primitive, to my mind, to do the above with chi-square when logistic regression, which has been around for quite a while, could just as easily be used.

By the way, it does not take longer to run a logistic regression in SPSS than it does a chi-square, and the PASTE commands are equally simple.

Joe Burleson


Reply | Threaded
Open this post in threaded view
|

Re: chi-square post-hoc tests

Bruce Weaver
Administrator
In reply to this post by Bridgette Portman
This is the second or third time I've seen someone mention z-tests under CROSSTABS.  I'm not familiar with that--is it new in v19?

Thanks,
Bruce


--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).
Reply | Threaded
Open this post in threaded view
|

Re: chi-square post-hoc tests

Bridgette Portman
Ah, I think I got it... It's an option if you have the SPSS Custom Tables
module. That came with the version 19 Grad Pack Premium that I recently
bought.

Anyway, it's an option under "Cells" in CROSSTABS. You can select "Compare
column proportions" with a further option to use a Bonferroni correction.
When it returns the crosstabs table, it indicates which column percentages
are significantly different. This seems like the sort of thing I'm trying
to do.

Bridgette


Reply | Threaded
Open this post in threaded view
|

Re: chi-square post-hoc tests

Ryan
In reply to this post by Bruce Weaver
Hi Bruce.

Click on the link below and go to page 27:

http://support.spss.com/productsext/statistics/documentation/19/client/User%20Manuals/English/IBM%20SPSS%20Statistics%20Base%2019.pdf

Ryan

Reply | Threaded
Open this post in threaded view
|

Re: chi-square post-hoc tests

Bruce Weaver
Administrator
Thanks Ryan.  That corner of the dialog is empty for me (in v18)--and Custom Tables is included with our license.  So I gather this is something new in v19.

Cheers,
Bruce


--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).