Apologies for the question. It is statistics related, but not directly SPSS.
There are plenty of post hoc tests (multiple comparisons) for one-way ANOVA, including Scheffé, Bonferroni, LSD, Tukey, and many more. I am looking for books, articles, or other reading materials that can be used as a guide on when to use one test over the others.
I tried searching Google but was not successful. If you have come across any reading materials, I would appreciate it if you would share them.
Thank you.
Lema
Lema,
Perhaps start with this paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6193594/
It provides a very good analysis of most of the post hoc tests, including some that can be used when not all of the requirements for an ANOVA are met, e.g., equality of variances.
Have fun!
Brian
Brian G. Dates, M.A.
Consultant in Program Evaluation, Research, and Statistics
248-229-2865
email: [hidden email]
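As a concrete illustration of what one of these procedures produces, here is a minimal Python sketch of Tukey's HSD on a one-way layout; the data, group labels, and means below are invented purely for illustration, and this is not a recommendation drawn from the paper.

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented example data: 20 observations in each of three groups.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(loc, 1.0, size=20) for loc in (5.0, 5.5, 6.5)])
group = np.repeat(["a", "b", "c"], 20)

# Tukey's HSD: all pairwise mean differences with family-wise 95% intervals.
result = pairwise_tukeyhsd(endog=y, groups=group, alpha=0.05)
print(result.summary())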
Brian has cited a paper that seems very fine, on my quick perusal.
NIH has seen fit to preserve it and reproduce it - that's a good
endorsement, too.
I will mention that most of these tests /come with/ a specification
of when they, in particular, are suitable.
What is missing is a recommendation: "Use planned comparisons
instead." The way you get power and specificity in doing tests is to
do FEW tests. Look at your hypotheses: Which ones matter most?
Is there a hierarchy? Test a factor score; if it is highly significant, that
is the "overall test" that allows you to test the (less reliable) items that
comprise it. Fifty possible item-tests might be reduced to two or
three overall, primary tests that are used to /describe/ and particularize
the overall effect noted for their factor score.
I learned that from Jerry Hogarty, early in my career. "Hospitalization,
or imminent re-hospitalization" was the single criterion he used for
measuring success of drug vs. placebo in a major trial on schizophrenic
outpatients. The dozen rating scales, with hundreds of total items, gave
details about the success.
In 35 years doing statistics for clinical trials, I occasionally found Bonferroni
useful, and LSD. I successfully avoided most requests to apply post-hoc
tests. I pushed "planned contrasts" for testing, and "effect size" for further
promoting the importance of a finding.
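To make the idea of a planned contrast concrete, here is a minimal Python sketch of one pre-specified contrast ("group A versus the average of groups B and C"); the groups and numbers are invented for illustration only.

import numpy as np
from scipy import stats

# Invented data: three groups and one pre-specified contrast.
groups = {
    "A": np.array([5.1, 6.0, 5.8, 6.3, 5.5]),
    "B": np.array([4.2, 4.8, 4.5, 4.9, 4.4]),
    "C": np.array([4.0, 4.6, 4.3, 4.7, 4.1]),
}
weights = np.array([1.0, -0.5, -0.5])   # contrast coefficients; they sum to zero

means = np.array([g.mean() for g in groups.values()])
ns = np.array([len(g) for g in groups.values()])

# Pooled within-group variance, i.e., the ANOVA mean square error.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
df_error = int(ns.sum()) - len(groups)
ms_error = ss_within / df_error

estimate = weights @ means                            # value of the contrast
se = np.sqrt(ms_error * np.sum(weights ** 2 / ns))    # its standard error
t_stat = estimate / se
p_two_sided = 2 * stats.t.sf(abs(t_stat), df_error)

print(f"contrast = {estimate:.3f}, t({df_error}) = {t_stat:.2f}, p = {p_two_sided:.4f}")

Because the contrast is specified in advance and tested alone, no multiplicity adjustment is spent on the dozens of pairwise comparisons that were never of interest.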
In one study, we wrote into the protocol that we intended to use a one-tailed,
5% test on one particular, important interaction -- because the sample size
would give too little power for any more conservative test. We were able to
publish that result (it came out significant) by pointing to the fact that this
was not cherry-picking, that we had written that test into the grant application.
--
Rich Ulrich
As a complement to the article Brian cites, there is an extension command, STATS PADJUST (installable from Extensions > Extension Hub), that provides six methods for adjusting significance levels in the multiple-testing scenario.
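For readers working outside SPSS, a rough Python analogue of that adjustment step is sketched below using statsmodels' multipletests; the p-values are invented, and this is only an illustration of the general idea, not the STATS PADJUST command itself.

from statsmodels.stats.multitest import multipletests

raw_p = [0.004, 0.012, 0.030, 0.047, 0.110, 0.200]   # invented unadjusted p-values

# Holm's step-down adjustment; other available method strings include
# "bonferroni", "sidak", "simes-hochberg", "hommel", "fdr_bh", and "fdr_by".
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="holm")

for p, pa, r in zip(raw_p, p_adj, reject):
    print(f"raw p = {p:.3f}   adjusted p = {pa:.3f}   reject H0: {r}")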
Like Rich, I've only taken a quick look at that KJA article, but it appears
to cover all of the tests & procedures that come up in typical applied statistics courses (in Psychology, at least).
I'll point you to a couple of other articles that are among the most thoughtful pieces I've read on the topic of "multiplicity". They're in The Lancet; I hope you have institutional access.
https://pubmed.ncbi.nlm.nih.gov/15866314/
https://pubmed.ncbi.nlm.nih.gov/15885299/
Cheers,
Bruce
In addition to the paper Brian Dates linked to, there is a similar paper on PubMed that uses an alternative method of presenting the material; here is the reference:
Kim, H.-Y. (2015). Statistical notes for clinical researchers: post-hoc multiple comparisons. Restorative Dentistry & Endodontics. doi:10.5395/rde.2015.40.2.172
NOTE: check the suggested related articles listed alongside it.
The above reference appears to rely on the Keppel & Wickens text, which has been a popular text in psychological statistics; the full reference is:
Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher's handbook (4th ed., pp. 111–130). New Jersey: Pearson Education.
I would also suggest Roger Kirk's text, specifically Chapter 5, "Multiple Comparison Tests." His Table 5.1-1 provides one way to organize the procedures into meaningful groups. See: Kirk, R. E. (2013). Experimental Design (4th ed.). Thousand Oaks, CA: Sage. Roger has also written a companion volume (in ms./docx format) on how to do the analyses from the 2013 text in R; a copy is available on his ResearchGate page.
Maxwell, Delaney, and Kelley's text also provides good coverage, though in a somewhat different fashion (everyone presents this material in their own idiosyncratic way). One really needs to read an entire text to appreciate the larger contexts and the issues one needs to be aware of (as Rich Ulrich points out).
One other issue to keep in mind is that some researchers have favorite multiple comparison procedures that they use regardless of how appropriate they are; communities of researchers also show similar biases. They may also not know some of the newer procedures. This affects the choice of which test to report, because if reviewers are not familiar with a particular test, one may have to provide justification and/or show that the same result obtains with other tests.
-Mike Palij
New York University
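As a small sketch of that last point, one quick way to see whether a conclusion survives several adjustment procedures is to run them side by side; the p-values below are invented, and statsmodels is used here only as a convenient stand-in for whatever software a reviewer prefers.

from statsmodels.stats.multitest import multipletests

raw_p = [0.004, 0.012, 0.030, 0.047, 0.110]   # invented unadjusted p-values

# Compare the decisions several common procedures would give at alpha = .05.
for method in ("bonferroni", "holm", "simes-hochberg", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method=method)
    print(f"{method:15s} adjusted p = {p_adj.round(3).tolist()}  reject = {reject.tolist()}")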