Re: "significantly" not significant

Posted by Dominic Lusinchi on Aug 28, 2006; 1:09am
URL: http://spssx-discussion.165.s1.nabble.com/significantly-not-significant-tp1070571p1070586.html

Rohan,

As you say, most statistical tests to detect differences are set up to assess,
well... whether a significant difference between two (or more) groups exists.
Significant here means that what is observed is unlikely to have been the
result of chance (a coincidence, to use the vernacular).

If no difference has been found (i.e., the result is not significant), this
does not "prove" that the groups are the same; it only means that we have
insufficient evidence to state that one is (statistically) significantly
different from the other. Hence, given the lack of evidence, we can continue
our work under the assumption of no difference or change.
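
To make this concrete, here is a minimal sketch (not part of the original
post; Python with numpy/scipy is used purely for illustration, even though
this list is about SPSS) in which the population means truly differ, yet a
small-sample t-test fails to reach significance:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# The population means truly differ by 2 units, but the samples are small.
a = rng.normal(100, 15, size=10)
b = rng.normal(102, 15, size=10)
t, p = stats.ttest_ind(a, b)
# With samples this small the test will usually fail to reject the null
# (p > .05) even though the population means are NOT equal: a
# non-significant result is absence of evidence, not evidence of absence.
print(f"t = {t:.2f}, p = {p:.2f}")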

The Shapiro-Wilk test and Levene's test, for example, are tests where, when
we cannot reject the null hypothesis (normal distribution and equal
variances, respectively), we proceed with our analysis under the assumption
of normality or homoscedasticity, respectively.
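
A small sketch of that idea with these two tests (again illustrative
Python/scipy rather than SPSS syntax, on simulated data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
g1 = rng.normal(50, 10, size=40)
g2 = rng.normal(55, 10, size=40)

# Shapiro-Wilk: null hypothesis = the data are normally distributed.
w, p_normal = stats.shapiro(g1)
# Levene: null hypothesis = the groups have equal variances.
f, p_equal_var = stats.levene(g1, g2)

# If we cannot reject (e.g. p > .05), we proceed under the working
# assumption of normality / homoscedasticity, as described above.
print(f"Shapiro-Wilk p = {p_normal:.2f}, Levene p = {p_equal_var:.2f}")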

If the p-value of your test is p = .95, this just means that the probability
of observing a difference at least as large as yours is very high under the
null hypothesis; the observed difference is a very common occurrence when
there is no true difference. In other words, if we were to take repeated
samples of size n from two populations with the same mean, the probability of
obtaining a difference as large as or larger than the one observed is .95.
Note that this describes how common your result would be if the null
hypothesis were true; it is not the probability that the null hypothesis (no
difference) is itself true, so it cannot be read as "95% confidence that the
means are similar."
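
That repeated-sampling interpretation can be checked directly by simulation.
This sketch (illustrative Python; the tiny "observed" difference is an
assumed value chosen just for the example) draws many pairs of samples from
two populations with the same mean and counts how often the difference is at
least as large as the observed one:

import numpy as np

rng = np.random.default_rng(42)
n = 30
observed_diff = 0.01  # hypothetical, very small observed difference

# Repeatedly sample two groups from populations with the SAME mean and
# record the absolute difference between the sample means.
diffs = np.array([
    abs(rng.normal(100, 15, n).mean() - rng.normal(100, 15, n).mean())
    for _ in range(10_000)
])

# Proportion of null differences as large as or larger than the observed
# one; for a tiny observed difference this is close to 1 (e.g. ~.95),
# which is exactly what a large p-value reports.
print(f"P(|diff| >= {observed_diff} | no true difference) ~ "
      f"{np.mean(diffs >= observed_diff):.2f}")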

Hope this is not too confusing, and helps.

Dominic Lusinchi
Statistician
Far West Research
Statistical Consulting
San Francisco, California
415-664-3032
www.farwestresearch.com
-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Rohan Lulham
Sent: Saturday, August 26, 2006 11:21 PM
To: [hidden email]
Subject: "significantly" not significant

Hello

This may be a naive question, but I was wondering how to test whether two
means are significantly similar (i.e., "significantly" not significantly
different). For example, being able to say with 95% confidence that the two
means are similar or the same based on the results of a test. It is a very
basic question, but one rarely talked about where most research traditions
are predominantly looking for differences.

Can the probability value of, say, an ANOVA test be used so that a p-value
of .95 would indicate 95% confidence that they are similar? I suspect that,
due to the underlying statistical theory of the approach, this is not
appropriate, but I am not quite sure.

Is there another way to set up an analysis that can test for similarity
between two means, or distributions?

Thanks for your time

Rohan