Tests of "significance"


Re: Tests of "significance"

Hector Maletta
Michael,
I apologize if I misread your meaning. However, the whole point of
significance is about the relationship between sample and population. When
you observe a correlation or association in a sample, then you know there is
a correlation or association in the sample: for that you do not need a
significance test. Even if the observed sample correlation or association is
small, that is the observed result in the sample, and that is that: if you
observed a sample correlation of 0.02, then you observed a sample
correlation of 0.02. Your sample data are (weakly) correlated. Significance
analysis has nothing to do with it.
Now, if you consider your sample as one of many possible samples that can be
drawn from the relevant population, you may ask whether you can say anything
about the population based on your sample results. This is the question
addressed by significance tests: how much your sample results allow you to
infer about the (unknown) population values. Your main tool is the Central
Limit Theorem (together with the Law of Large Numbers), which states that as
sample size grows larger, the results of many random samples tend to cluster
around the population value, approaching the Gaussian (normal) distribution
with a mean equal to the population value, and with diminishing sampling
error (i.e. less variance in the distribution of sample results around the
population value) as sample size grows larger.
Now, armed with this knowledge, you can report a sample result (from a
sample of size N), adding that you have X% confidence (say, 95%) that the
population value is within a certain distance of your sample result, but
with the ever-present risk that it lies somewhere else, a risk carrying the
complementary probability of 5%.
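A minimal sketch of that kind of report (the estimate, standard deviation, and sample size below are made-up numbers; 1.96 is the usual normal-approximation multiplier for 95% confidence):

```python
import math

def confidence_interval(estimate, sd, n, z=1.96):
    """Approximate 95% confidence interval for a population mean.

    margin = z * (sd / sqrt(n)), the z-multiple of the standard error.
    """
    margin = z * sd / math.sqrt(n)
    return (estimate - margin, estimate + margin)

# Illustrative values only: sample mean 100.0, sd 15.0, n = 900
low, high = confidence_interval(estimate=100.0, sd=15.0, n=900)
print(round(low, 2), round(high, 2))  # 99.02 100.98
```

With 95% confidence the population value lies inside the interval; the remaining 5% is the ever-present risk that it does not.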
If your sample results are strong (say r=0.80) you would not need a very
large sample to achieve 95% significance (i.e. to be 95% confident that the
population correlation is ABOVE ZERO). You will not know whether the
population correlation is 0.02, 0.10, 0.30 or 0.80, you will only know there
is some 95% probability it is above zero, and a complementary 5% chance that
it is at or below zero. With a smaller sample, you would not be sure even of
that, and would have to settle for a lower significance level (90%? 80%?) or
abandon your sample and start again from scratch.
If your observed sample correlation was very weak, say r=0.02, and your
sample was relatively small, say N=500, you could not achieve 95%
significance (not even 90%): the probability of obtaining such a result when
the population r is zero would be above 5% or 10%. But if you increase your
sample to 50 million people (say, if you work with the US census database,
or a 50-million-case sample of the US census) you may find some 0.02
correlations that are statistically significant. Of course, they will still
be weak, but you are statistically confident they reflect a true population
correlation and not a quirk of your sample. Your colleague with a sample of
200 cases, instead, may not be confident that an r=0.70 is not a sample
fluke, even though it is strong.
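This size-versus-strength trade-off can be sketched with the standard t-statistic for a correlation, t = r*sqrt((n-2)/(1-r^2)); the helper below is illustrative (it uses a normal approximation to the t tail, which is adequate at the sample sizes discussed here):

```python
import math

def corr_p_value(r, n):
    """Two-sided p-value for H0: the population correlation is zero.

    Uses t = r*sqrt((n-2)/(1-r^2)), then a normal approximation to the
    t distribution for the tail probability.
    """
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

print(corr_p_value(0.02, 500))         # well above 0.05: cannot claim significance
print(corr_p_value(0.02, 50_000_000))  # effectively zero: weak but significant
print(corr_p_value(0.80, 30))          # far below 0.05: strong r, small sample
```

The same r=0.02 that is hopeless at N=500 is overwhelmingly significant at N=50 million, while a strong r=0.80 clears the bar even with a handful of cases.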
You did not say anything actually wrong, and I apologize again for giving
that impression. But I think it is essential to distinguish
between significance testing (which assesses the probability of
correspondence between sample and population) and substantive analysis
(which assesses the relationship between variables for substantive
purposes).
Hector
-----Original Message-----
From: Granaas, Michael [mailto:[hidden email]]
Sent: 09 April 2008 12:39
To: Hector Maletta; [hidden email]
Subject: RE: Tests of "significance"


??????

See my comments below

Michael
****************************************************
Michael Granaas             [hidden email]
Assoc. Prof.                Phone: 605 677 5295
Dept. of Psychology         FAX:  605 677 3195
University of South Dakota
414 E. Clark St.
Vermillion, SD 57069
*****************************************************




>-----Original Message-----
>From: SPSSX(r) Discussion on behalf of Hector Maletta
>Sent: Tue 4/8/08 9:53 PM
>To: [hidden email]
>Subject: Re: Tests of "significance"

>It is nice that you thank everybody, Bob, but Michael Granaas' opinion is
>not right, for several reasons:

I think you are the one who is wrong Hector.

>1. The original question was not about correlation but about chi square,
>which concerns the difference between observed frequencies and those
>expected in case of randomness or independence.

Huh?  If the test of independence fails what does that mean?  It means that
the observed frequencies are correlated.  For example let's say that we are
looking at gender and political party affiliation in the U.S.  A chi-square
test of independence is rejected indicating that party affiliation is
associated (correlated) with gender.

If you have a strong preference for "association" rather than "correlation"
I have no objection.  But either way we are talking about a statistical test
that helps us determine whether or not an association exists.
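Michael's gender-by-party example can be made concrete with a hand-rolled test of independence; the counts below are hypothetical, and phi = sqrt(chi2/n) gives the strength of the association from the same statistic:

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = [a + b, c + d]
    cols = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = rows[i] * cols[j] / n  # count expected under independence
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = gender, columns = party affiliation
table = [[60, 40],
         [45, 55]]
chi2 = chi_square_2x2(table)
phi = math.sqrt(chi2 / 200)  # phi coefficient: size of the association
print(round(chi2, 2), round(phi, 2))  # 4.51 0.15
```

The statistic exceeds 3.84 (the .05 critical value for 1 df), so independence is rejected, yet phi shows the association is modest: the test answers "is there an association?", not "how strong is it?".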

>2. Even in the case of evaluating the significance of a correlation, the
>question of significance is not about the existence of correlation, but
>whether you (based on the correlation observed in a sample of a certain
>size) can infer --with a given degree of confidence-- that some nonzero
>correlation exists in the population.

I certainly don't remember saying anything different, except for not
explicitly stating that the conclusions are about the population and not
talking about a "given degree of confidence" which you expand on below.  If
you wish to misread my comments as limited to samples I suppose that I will
have to be more explicit in the future.



>To see why this is different imagine the following situations:
>(a) Your sample shows a respectable correlation, say r=0.40, but your
>sample is very small and your significance level is pretty high (99%), so
>you cannot be 99% confident that the actual population correlation is not
>zero.
>(b) The observed sample correlation is very small (say r=0.02) but your
>sample is very large (several million cases), so you can say with 99%
>confidence that a nonzero correlation exists in the population. If you
>lower your desired significance level, say to 95%, you may be able to say
>the same with a much smaller sample, perhaps tens of thousands.

Bigger sample sizes increase power and allow you to detect smaller effects.
Okay.  I don't see how you felt that I said anything contradictory to that
conclusion.

On the other hand I am not at all sure what you are talking about when you
discuss a significance level of 99%.  Does that mean that you have a p-value
of ~.01?

If you have a p-value = .01 with a sample of n = 25 and again with a sample
of n=25,000,000 the risk of a type I error is identical and the strength of
your certainty as to the existence of a correlation is not changed at all.

On the other hand, a p-value of ~.99 is much more impressive with a very
large sample than with a very small sample.

>In either case, you can commit two kinds of errors:
>(i) False positives: You may conclude a nonzero correlation exists in the
>population, when none actually exists.
>(ii) False negatives: You may conclude that you are not able to discard the
>possibility of a zero population correlation, when the population
>correlation is actually nonzero.

And this is relevant to the current question how?  The gentleman asked how
to explain a significant result in plain English.

>Also in either case, rejecting the null hypothesis is not equivalent to
>proving the truth of the research hypothesis (other research hypotheses may
>be true instead of the one you are after). It is best to think of your
>conclusions in a cautious negative phrasing: "I am not able to discard the
>null hypothesis that no correlation exists in the population", or "I am not
>able to discard the hypothesis that some nonzero correlation exists in the
>population", promptly adding that both these statements have in turn a
>certain probability of being in error.
>Statistics is a course in humility.

Huh?
If you have failed to reject, these statements make some sense. But if you
have rejected, then you can certainly, if tentatively, conclude that there
is evidence of an association.

Michael


>Hector

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Bob
Schacht
Sent: 08 April 2008 22:11
To: [hidden email]
Subject: Re: Tests of "significance"

At 06:00 AM 4/8/2008, Granaas, Michael wrote:

>I apologize if this is a duplicate, but I didn't see my earlier response
>show up on the list and I received no confirmation.
>
>It seems to me that if the test of independence is being rejected the
>"plain English" explanation is that responses to items and outcomes are
>correlated.
>
>It is likely wise to compute a phi-coefficient so that the size of the
>correlation can be included in the description. E.g., responses to item 7
>were very slightly correlated with outcome while responses to item 12 were
>strongly correlated with outcome.
>
>Michael


Thanks to Michael, Hector, Art, Jon, & Paul for your interesting and
helpful replies! I like Michael's suggestion to use the word "correlated,"
which seems to be widely understood, if often confused with causation. I
also note how hard it is for most of us to abandon jargon and confine
ourselves to common English. Will the following pass muster?

>Statistically significant results.

"For this question, there is at least a 95% chance that participant
satisfaction and employment outcome are correlated."

>Almost statistically significant

"For this question, there is a 91% chance that participant satisfaction and
employment outcome are correlated. However, this falls short of the 95%
level usually required for statistical significance."

>Not statistically significant

"For this question, it does not appear that participant satisfaction and
employment outcome are correlated."

Thanks,
Bob

Robert M. Schacht, Ph.D. <[hidden email]>
Pacific Basin Rehabilitation Research & Training Center
1268 Young Street, Suite #204
Research Center, University of Hawaii
Honolulu, HI 96814

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD


Re: Tests of "significance"

Bob Schacht-3
In reply to this post by Hector Maletta
At 04:53 PM 4/8/2008, Hector Maletta wrote:
>It is nice that you thank everybody, Bob, but Michael Granaas' opinion is not
>right, for several reasons:
>1. The original question was not about correlation but about chi square,
>which concerns the difference between observed frequencies and those
>expected in case of randomness or independence.

Hector,
Thank you for your detailed response. You are helping me to see how
difficult it is for statisticians to communicate with the "man on the
street."
I responded favorably to Michael Granaas' suggestion because the man on the
street may recognize what "correlate" means, in a general sense, without
the kind of technical specificity that you (and most other statisticians)
prefer. Because of your objection, I will shift my common-sense statement
from this:

>Statistically significant results.

"For this question, there is at least a 95% chance that participant
satisfaction and employment outcome are correlated."

to this:

"For this question, there is at least a 95% chance that participant
satisfaction and employment outcome are related."

"Related" is a non-statistical term that is not identified with any
particular statistical procedure. Is this acceptable?

>2. Even in the case of evaluating the significance of a correlation, the
>question of significance is not about the existence of correlation, but
>whether you (based on the correlation observed in a sample of a certain
>size) can infer --with a given degree of confidence-- that some nonzero
>correlation exists in the population. . . .

I would suggest that the man on the street cares not a fig for this
distinction, which he would probably regard as "pedantic," and we cannot
insist that he make a distinction that he does not care about and does not
understand. However, you end up with an important distinction that he might
understand:

>In either case, you can commit two kinds of errors:
>(i) False positives: You may conclude a nonzero correlation exists in the
>population, when none actually exists.

Or, more generally, that a relationship exists between the variables in the
population, when there actually is none.

>(ii) False negatives: You may conclude that you are not able to discard
>the possibility of a zero population correlation, when the population
>correlation is actually nonzero.

Or, more generally, that a relationship does not exist when there is one,
although doing an equivalent linguistic transformation on your wording
eludes me. Your wording would be regarded by our man on the street as
hopeless mumbo jumbo.

>Also in either case, rejecting the null hypothesis is not equivalent to
>proving the truth of the research hypothesis (other research hypotheses may
>be true instead of the one you are after).

Exactly. And this is the most important but difficult point to get across:
that a simple positive is not necessarily the same as a double negative. I
think we can cover this by using the language of chance, e.g., "There is at
least a 95% chance that..."

>It is best to think of your
>conclusions in a cautious negative phrasing: "I am not able to discard the
>null hypothesis that no correlation exists in the population", or "I am not
>able to discard the hypothesis that some nonzero correlation exists in the
>population", promptly adding that both these statements have in turn a
>certain probability of being in error.

Our man on the street would, I'm afraid, regard both formulations as
incomprehensible mumbo jumbo.

>Statistics is a course in humility.

That it is. I began my discourse on this subject with my assistant with the
observation that "statisticians are humble folk..."
But, humble or not, we must find a way to accurately communicate with the
man on the street, in terms he can comprehend, rather than retreating behind
a wall of language that is, to him, incomprehensible.

Bob





Robert M. Schacht, Ph.D. <[hidden email]>
Pacific Basin Rehabilitation Research & Training Center
1268 Young Street, Suite #204
Research Center, University of Hawaii
Honolulu, HI 96814


Re: Tests of "significance"

Swank, Paul R
Whoa. See below.

Paul R. Swank, Ph.D.
Professor and Director of Research
Children's Learning Institute
University of Texas Health Science Center - Houston


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Bob Schacht
Sent: Wednesday, April 09, 2008 3:32 PM
To: [hidden email]
Subject: Re: Tests of "significance"

At 04:53 PM 4/8/2008, Hector Maletta wrote:
>It is nice that you thank everybody, Bob, but Michael Granaas' opinion is
>not right, for several reasons:
>1. The original question was not about correlation but about chi square,
>which concerns the difference between observed frequencies and those
>expected in case of randomness or independence.

Hector,
Thank you for your detailed response. You are helping me to see how
difficult it is for statisticians to communicate with the "man on the
street."
I responded favorably to Michael Granaas' suggestion because the man on the
street may recognize what "correlate" means, in a general sense, without
the kind of technical specificity that you (and most other statisticians)
prefer. Because of your objection, I will shift my common-sense statement
from this:

>Statistically significant results.

"For this question, there is at least a 95% chance that participant
satisfaction and employment outcome are correlated."

to this:

"For this question, there is at least a 95% chance that participant
satisfaction and employment outcome are related."

There is no "chance" that satisfaction and employment outcome are related.
They either are or they are not. The probability refers to the chance that
such a result (or one even more extreme) would have happened by chance
assuming the null hypothesis is true. I don't give a fig about what the man
on the street will buy. Saying it wrong is saying it wrong. Much of the
complaining about null hypothesis testing comes about because so many
people interpret it incorrectly. The result means simply that you have some
evidence to support the statement that they are related. Does that mean
they are? No. Does that mean they are not? No. If you wish to make a
probabilistic statement about the hypothesis, become a Bayesian.

"Related" is a non-statistical term that is not identified with any
particular statistical procedure. Is this acceptable?

>2. Even in the case of evaluating the significance of a correlation, the
>question of significance is not about the existence of correlation, but
>whether you (based on the correlation observed in a sample of a certain
>size) can infer --with a given degree of confidence-- that some nonzero
>correlation exists in the population. . . .

I would suggest that the man on the street cares not a fig for this
distinction, which he would probably regard as "pedantic," and we cannot
insist that he make a distinction that he does not care about and does not
understand. However, you end up with an important distinction that he might
understand:

>In either case, you can commit two kinds of errors:
>(i) False positives: You may conclude a nonzero correlation exists in the
>population, when none actually exists.

Or, more generally, that a relationship exists between the variables in the
population, when there actually is none.

>(ii) False negatives: You may conclude that you are not able to discard
>the possibility of a zero population correlation, when the population
>correlation is actually nonzero.

Or, more generally, that a relationship does not exist when there is one,
although doing an equivalent linguistic transformation on your wording
eludes me. Your wording would be regarded by our man on the street as
hopeless mumbo jumbo.

>Also in either case, rejecting the null hypothesis is not equivalent to
>proving the truth of the research hypothesis (other research hypotheses
>may be true instead of the one you are after).

Exactly. And this is the most important but difficult point to get across:
that a simple positive is not necessarily the same as a double negative. I
think we can cover this by using the language of chance, e.g., "There is at
least a 95% chance that..."

>It is best to think of your
>conclusions in a cautious negative phrasing: "I am not able to discard the
>null hypothesis that no correlation exists in the population", or "I am not
>able to discard the hypothesis that some nonzero correlation exists in the
>population", promptly adding that both these statements have in turn a
>certain probability of being in error.

Our man on the street would, I'm afraid, regard both formulations as
incomprehensible mumbo jumbo.

>Statistics is a course in humility.

That it is. I began my discourse on this subject with my assistant with the
observation that "statisticians are humble folk..."
But, humble or not, we must find a way to accurately communicate with the
man on the street, in terms he can comprehend, rather than retreating behind
a wall of language that is, to him, incomprehensible.

I emphasize your word "accurately".

Bob





Re: Tests of "significance"

Bob Schacht-3
In reply to this post by Bob Schacht-3
I proposed this wording for explaining a test of significance:
"For this question, there is at least a 95% chance that participant
satisfaction and employment outcome are related."


At 12:34 PM 4/9/2008, Swank, Paul R replied:
>There is no "chance" that satisfaction and employment outcome are related.
>They either are or they are not. The probability refers to the
>chance that such a result (or one even more extreme) would have happened
>by chance assuming the null hypothesis is true.

The "chance," at least at the popular level, refers to the probability that
one's claim is correct, and that is the point of clarification that my
proposed statement may need. How about

"For this question, there is at least a 95% chance it would be correct to
claim that participant satisfaction and employment outcome are related."
My bet would be that most lay persons would not be able to see a dime's
worth of difference between the first statement (at the beginning of this
message), and the revised version, and would regard the extra words as
"pedantic."

I am constantly getting beaten about the head and shoulders because my
reports are too long, and "no one will read them"-- because my usual style
of discourse is towards statistical precision of the kind you are
advocating, rather than towards short declarative sentences with few
subordinate clauses.

>I don't give a fig about what the man on the street will buy.

Well then you are denying the basis of my question, which started from the
premise that I *do* care what the man on the street will understand. And I
*do* want them to understand correctly-- at least on an elementary level.
But I have learned that it is useless to demand that they understand with
the same level of detail that we are comfortable with.

>Saying it wrong is saying it wrong.

That is why I am asking in this forum. I do not want to say it "wrong;"
however, I also do not wish to go into any more detail than necessary, and
most of us have a much higher tolerance for details than the typical lay
person. I am exploring the question of whether we can be "right" without
going into excruciating detail.

>Much of the complaining about null hypothesis testing comes about because
>so many people interpret incorrectly. ...

So, how do we woo them into understanding without turning them off so that
they won't even read what we write?
Oh, wait.
>I don't give a fig about what the man on the street will buy.

Sorry, I guess I'm barking up the wrong tree.

Bob

Robert M. Schacht, Ph.D. <[hidden email]>
Pacific Basin Rehabilitation Research & Training Center
1268 Young Street, Suite #204
Research Center, University of Hawaii
Honolulu, HI 96814


Re: Tests of "significance"

Bob Schacht-3
In reply to this post by Bob Schacht-3
At 01:30 PM 4/9/2008, Hector Maletta wrote:
>Bob,
>You are very right in saying that it is often quite difficult to explain
>statistical results to the lay person without telling lies and without
>using incomprehensible mumbo-jumbo. However, this is so in many scientific
>fields (try to explain quantum mechanics or the operation of DNA and RNA
>to the ubiquitous man-in-the-street; this mythical character probably
>cannot grasp even the notion of a nanogram or natural selection: nearly
>half of Americans can't).

Right.

>What I find lacking in your explanation to the lay person is the notion
>that your confidence (or lack thereof) in the existence of a relationship
>is related to the size of your sample. For instance: Suppose again you
>find a correlation of r=0.02 with a sample of 50 million cases, and the
>result is statistically significant (p<0.01). So would you say "There is
>at least a 99% chance that the two variables are related"? Should you not
>add "but the relationship is vanishingly weak"? . . .


First, most tests of significance already take into account the sample
size, so one should not have to belabor the point unless the sample size is
small, and the test statistic is well within the bounds of normal
variation. In that case, what I should probably say is something like this:

(If not statistically significant:)

"For this question, it does not appear that participant satisfaction and
employment outcome are related. However, the sample size may be too small
to be confident about this conclusion."

(This comment would be especially appropriate if there are other reasons to
suspect that satisfaction and outcome for this question are indeed related.)

Is this any better?

Bob

Robert M. Schacht, Ph.D. <[hidden email]>
Pacific Basin Rehabilitation Research & Training Center
1268 Young Street, Suite #204
Research Center, University of Hawaii
Honolulu, HI 96814


Re: Tests of "significance"

Hector Maletta
Yes, it is better. Most significance tests do indeed take sample size into
account (in fact, the standard error of an estimate equals the sample
standard deviation divided by the square root of sample size). But your
client is not aware of that.

 

However, bear in mind that nearly any difference or correlation, however
small, may turn out to be statistically significant with a large enough
sample. Making it statistically significant with a larger sample does not
make it more relevant or substantively more important. It only makes you
more confident that it is greater than zero in the population.
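Hector's parenthetical formula is easy to demonstrate directly; the standard deviation and sample sizes below are illustrative numbers, not from the thread:

```python
import math

def standard_error(sd, n):
    """Standard error of a sample mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

# A hundredfold increase in sample size cuts the standard error only tenfold
for n in (25, 2_500, 250_000):
    print(n, standard_error(15.0, n))  # 3.0, then 0.3, then 0.03
```

The shrinking standard error is why almost any nonzero population difference eventually becomes "significant" as n grows, without the effect itself getting any larger.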

 

Hector

 

  _____  

From: Bob Schacht [mailto:[hidden email]]
Sent: 09 April 2008 22:14
To: Hector Maletta; [hidden email]
Subject: RE: Tests of "significance"

 

At 01:30 PM 4/9/2008, Hector Maletta wrote:



Bob,
You are very right in saying that it is often quite difficult to explain
statistical results to the lay person without telling lies and without using
incomprehensible mumbo-jumbo. However, this is so in many scientific fields
(try to explain quantum mechanics or the operation of DNA and RNA to the
ubiquitous man-in-the-street; this mythical character cannot probably grasp
even the notion of a nanogram or natural selection: nearly half of Americans
cant).


Right.




What I find lacking in your explanation to the lay person is the notion that
your confidence (or lack thereof) in the existence of a relationship is
related to the size of your sample. For instance: Suppose again you find a
correlation of r=0.02 with a sample of 50 million cases, and the result is
statistically significant (p<0.01). So would you say There is at least a
99% chance that the two variables are related? Should you not add but the
relationship is vanishingly weak? . . .  



First, most tests of significance already take into account the sample size,
so one should not have to belabor the point unless the sample size is small,
and the test statistic is well within the bounds of normal variation. In
that case, what I should probably say is something like this:

(If not statistically significant:)

"For this question, it does not appear that participant satisfaction and
employment outcome are related. However, the sample size may be too small to
be confident about this conclusion."

(This comment would be especially appropriate if there are other reasons to
suspect that satisfaction and outcome for this question are indeed related.)

Is this any better?

Bob



Robert M. Schacht, Ph.D. <[hidden email]>
Pacific Basin Rehabilitation Research & Training Center
1268 Young Street, Suite #204
Research Center, University of Hawaii
Honolulu, HI 96814


Re: Tests of "significance"

Kornbrot, Diana
In reply to this post by Bob Schacht-3
Interesting discussion, BUT
Significance tests without descriptive statistics are ALWAYS completely
meaningless to everyone: lay public, scientists and statisticians.
So no wonder you can't find a satisfactory explanation for the significance
test alone; none exists.

So here are my 2p worth of advice. In a survey of N people, report:

1. The probability that employed people answered that they were satisfied
with their life was p[employed]; based on this number of respondents
(Nemp), there is a probability of 95% that the probability of life
satisfaction in 'similar' employed people is between p(lcl emp) and
p(ucl emp).
2. The probability that unemployed people answered that they were satisfied
with their life was p[unemployed]; based on this number of respondents
(Nunemp), there is a probability of 95% that the probability of life
satisfaction in 'similar' unemployed people is between p(lcl unemp) and
p(ucl unemp).
3. The probability that this difference in probability of life satisfaction
occurred by chance is given by the value of p(null).
If p(null) > 5%, add: thus the observed difference is quite likely to have
occurred by chance.
If p(null) < 5%, add: thus the observed difference is unlikely to have
occurred by chance.
You may, or may not, want to 'bother' a lay audience with the confidence
levels in italics.
With this form of reporting, which includes descriptive statistics:
1. one gets to know the general level of life satisfaction;
2. readers can judge for themselves whether any difference that is highly
unlikely to have occurred by chance is important;
3. readers can judge for themselves whether some difference of a magnitude
that is important to them has occurred. Some might wish to rush down to the
labour exchange [or tell their boss their true feelings] even if p(null)
was 0.055, or even 0.15. Mysteriously, in the various suggestions based on
significance alone, we do not even learn whether the difference favoured
the employed or the unemployed.

Similar arguments apply to larger R*C tables. Just as in ANOVA post hoc
tests identify discrepant means, so in two-way classification frequencies
[chi-square tests] one needs to identify the cells with probabilities higher
and lower than expected [that's why stats packages give one the cell
chi-square].
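The reporting scheme above can be sketched in Python. This is a rough
illustration with made-up counts; the Wald intervals and pooled
two-proportion z test are my stand-ins, the thread does not prescribe a
particular method.

```python
from math import sqrt
from statistics import NormalDist

def prop_ci(k, n, confidence=0.95):
    """Normal-approximation (Wald) confidence interval for k/n."""
    p = k / n
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def p_null(k1, n1, k2, n2):
    """Two-sided p-value that the observed difference in proportions
    arose by chance (pooled two-proportion z test)."""
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (k1 / n1 - k2 / n2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Made-up counts: 70/100 employed vs 50/100 unemployed satisfied.
print(prop_ci(70, 100))          # 95% CI for p[employed]
print(prop_ci(50, 100))          # 95% CI for p[unemployed]
print(p_null(70, 100, 50, 100))  # p(null) well below 0.05
```

A reader then sees the satisfaction levels, their uncertainty, which group
is favoured, and p(null), all in one place.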

Best

Diana

On 10/4/08 02:02, "Bob Schacht" <[hidden email]> wrote:

> I proposed this wording for explaining a test of significance:
> "For this question, there is at least a 95% chance that participant
> satisfaction and employment outcome are related."
>
>
> At 12:34 PM 4/9/2008, Swank, Paul R replied:
>> >There is no "chance" that satisfaction and employment outcome are related.
>> >They either are or they are not. The probability refers to the
>> >chance that such a result (or one  even more extreme) would have happened
>> >by chance assuming the null hypothesis is true.
>
> The "chance," at least at the popular level, refers to the probability that
> one's claim is correct, and that is the point of clarification that my
> proposed statement may need. How about
>
> "For this question, there is at least a 95% chance it would be correct to
> claim that participant satisfaction and employment outcome are related."
> My bet would be that most lay persons would not be able to see a dime's
> worth of difference between the first statement (at the beginning of this
> message), and the revised version, and would regard the extra words as
> "pedantic."
>
> I am constantly getting beaten about the head and shoulders because my
> reports are too long, and "no one will read them"-- because my usual style
> of discourse is towards statistical precision of the kind you are
> advocating, rather than towards short declarative sentences with few
> subordinate clauses.
>
>> >I don't give a fig about what the man on the street will buy.
>
> Well then you are denying the basis of my question, which started from the
> premise that I *do* care what the man on the street will understand. And I
> *do* want them to understand correctly-- at least on an elementary level.
> But I have learned that it is useless to demand that they understand with
> the same level of detail that we are comfortable with.
>
>> >Saying it wrong is saying it wrong.
>
> That is why I am asking in this forum. I do not want to say it "wrong;"
> however, I also do not wish to go into any more detail than necessary, and
> most of us have a much higher tolerance for details than the typical lay
> person. I am exploring the question of whether we can be "right" without
> going into excruciating detail.
>
>> >Much of the complaining about null hypothesis testing comes about because
>> >so many people interpret incorrectly. ...
>
> So, how do we woo them into understanding without turning them off so that
> they won't even read what we write?
> Oh, wait.
>> >I don't give a fig about what the man on the street will buy.
>
> Sorry, I guess I'm barking up the wrong tree.
>
> Bob
>
> Robert M. Schacht, Ph.D. <[hidden email]>
> Pacific Basin Rehabilitation Research & Training Center
> 1268 Young Street, Suite #204
> Research Center, University of Hawaii
> Honolulu, HI 96814
>
> =====================
> To manage your subscription to SPSSX-L, send a message to
> [hidden email] (not to SPSSX-L), with no body text except the
> command. To leave the list, send the command
> SIGNOFF SPSSX-L
> For a list of commands to manage subscriptions, send the command
> INFO REFCARD


Professor Diana Kornbrot
 School of Psychology
 University of Hertfordshire
 College Lane, Hatfield, Hertfordshire AL10 9AB, UK

 email:  [hidden email]
 web:    http://web.mac.com/kornbrot/iweb/KornbrotHome.html
 voice:   +44 (0) 170 728 4626
 fax:      +44 (0) 170 728 5073
Home
 19 Elmhurst Avenue
 London N2 0LT, UK
   
    voice:   +44 (0) 208 883  3657
    mobile: +44 (0) 796 890 2102
    fax:      +44 (0) 870 706 4997


Re: Tests of "significance"

Swank, Paul R
In reply to this post by Bob Schacht-3
As soon as you say there is a chance you are correct, then you are in
the Bayesian world. Frequentist statistics assume that a hypothesis is
either true or false at a given time. There is no probability (or
chance) associated with it. The chance is a priori. If the dreaded null
is true, then there is only a small chance (< .05)  that we would get
such a result or one even more discrepant with the null. That's why we
should avoid the term chance or probability when stating our results to
lay people. It takes a long time to grasp the concept of null hypothesis
testing if all those students I've seen over the years are any
indication. That's why I adopted an evidence based approach. I think
most people know enough law to understand evidence. They also likely
understand that someone can be declared guilty when they are not and
vice versa. Thus we could say we have evidence to support the claim that
the two variables are related or there is insufficient evidence to
support that claim. Wouldn't that be palatable to the man on the street?

 

Paul

 

Paul R. Swank, Ph.D.

Professor and Director of Research

Children's Learning Institute

University of Texas Health Science Center - Houston

 

From: Bob Schacht [mailto:[hidden email]]
Sent: Wednesday, April 09, 2008 8:02 PM
To: Swank, Paul R; [hidden email]
Subject: RE: Tests of "significance"

 

I proposed this wording for explaining a test of significance:
"For this question, there is at least a 95% chance that participant
satisfaction and employment outcome are related."


At 12:34 PM 4/9/2008, Swank, Paul R replied:



There is no "chance" that satisfaction and employment outcome are
related. They either are or they are not. The probability refers to the
chance that such a result (or one  even more extreme) would have
happened by chance assuming the null hypothesis is true.


The "chance," at least at the popular level, refers to the probability
that one's claim is correct, and that is the point of clarification that
my proposed statement may need. How about

"For this question, there is at least a 95% chance it would be correct
to claim that participant satisfaction and employment outcome are
related."
My bet would be that most lay persons would not be able to see a dime's
worth of difference between the first statement (at the beginning of
this message), and the revised version, and would regard the extra words
as "pedantic."

I am constantly getting beaten about the head and shoulders because my
reports are too long, and "no one will read them"-- because my usual
style of discourse is towards statistical precision of the kind you are
advocating, rather than towards short declarative sentences with few
subordinate clauses.




I don't give a fig about what the man on the street will buy.


Well then you are denying the basis of my question, which started from
the premise that I *do* care what the man on the street will understand.
And I *do* want them to understand correctly-- at least on an elementary
level. But I have learned that it is useless to demand that they
understand with the same level of detail that we are comfortable with.




Saying it wrong is saying it wrong.


That is why I am asking in this forum. I do not want to say it "wrong;"
however, I also do not wish to go into any more detail than necessary,
and most of us have a much higher tolerance for details than the typical
lay person. I am exploring the question of whether we can be "right"
without going into excruciating detail.




Much of the complaining about null hypothesis testing comes about
because so many people interpret incorrectly. ...


So, how do we woo them into understanding without turning them off so
that they won't even read what we write?
Oh, wait.



I don't give a fig about what the man on the street will buy.


Sorry, I guess I'm barking up the wrong tree.

Bob



Robert M. Schacht, Ph.D. <[hidden email]>
Pacific Basin Rehabilitation Research & Training Center
1268 Young Street, Suite #204
Research Center, University of Hawaii
Honolulu, HI 96814


Re: Tests of "significance"

Granaas, Michael
In reply to this post by Bob Schacht-3
How about: "based on the data available we have evidence that participant satisfaction and employment outcome are related."  (I would add a comment about the size of the relation suggested by the data, perhaps as a confidence interval.)

To me the use of "95%" (or any other percentage) is arriving at a Bayesian conclusion using frequentist methods.  Or, perhaps, it is an effort to create a confidence interval where none exists.  Either way I am not comfortable.

Oddly enough I would be comfortable if you reported a 95% CI for the effect size.  

One place where frequentists and Bayesians agree is that you need replication to create some level of confidence in your results.  However, frequentist approaches are often taught as if a single study decides the issue.  Here we have a single study/analysis which provides evidence that the null is false and we can say we have evidence that there is a relation.

If additional studies/analyses reached the same conclusion then I think we can talk about levels of confidence in the finding.
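The 95% CI for an effect size mentioned above can be computed for a
correlation via the Fisher z transform. This is one standard way to do it,
sketched below as an assumption on my part; the post does not specify a
method.

```python
from math import atanh, tanh, sqrt
from statistics import NormalDist

def r_confidence_interval(r, n, confidence=0.95):
    """CI for a Pearson correlation via the Fisher z transform:
    z = atanh(r) is approximately normal with SE = 1/sqrt(n - 3),
    and the endpoints are mapped back with tanh."""
    z = atanh(r)
    se = 1 / sqrt(n - 3)
    zc = NormalDist().inv_cdf(0.5 + confidence / 2)
    return tanh(z - zc * se), tanh(z + zc * se)

print(r_confidence_interval(0.5, 100))  # roughly (0.34, 0.63)
```

Reporting such an interval conveys both the size of the relation and the
uncertainty around it, which a bare p-value does not.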

Michael

****************************************************
Michael Granaas             [hidden email]
Assoc. Prof.                Phone: 605 677 5295
Dept. of Psychology         FAX:  605 677 3195
University of South Dakota
414 E. Clark St.
Vermillion, SD 57069
*****************************************************




-----Original Message-----
From: SPSSX(r) Discussion on behalf of Bob Schacht
Sent: Wed 4/9/08 8:02 PM
To: [hidden email]
Subject: Re: Tests of "significance"
 
I proposed this wording for explaining a test of significance:
"For this question, there is at least a 95% chance that participant
satisfaction and employment outcome are related."


At 12:34 PM 4/9/2008, Swank, Paul R replied:
>There is no "chance" that satisfaction and employment outcome are related.
>They either are or they are not. The probability refers to the
>chance that such a result (or one  even more extreme) would have happened
>by chance assuming the null hypothesis is true.

The "chance," at least at the popular level, refers to the probability that
one's claim is correct, and that is the point of clarification that my
proposed statement may need. How about

"For this question, there is at least a 95% chance it would be correct to
claim that participant satisfaction and employment outcome are related."
My bet would be that most lay persons would not be able to see a dime's
worth of difference between the first statement (at the beginning of this
message), and the revised version, and would regard the extra words as
"pedantic."

I am constantly getting beaten about the head and shoulders because my
reports are too long, and "no one will read them"-- because my usual style
of discourse is towards statistical precision of the kind you are
advocating, rather than towards short declarative sentences with few
subordinate clauses.

>I don't give a fig about what the man on the street will buy.

Well then you are denying the basis of my question, which started from the
premise that I *do* care what the man on the street will understand. And I
*do* want them to understand correctly-- at least on an elementary level.
But I have learned that it is useless to demand that they understand with
the same level of detail that we are comfortable with.

>Saying it wrong is saying it wrong.

That is why I am asking in this forum. I do not want to say it "wrong;"
however, I also do not wish to go into any more detail than necessary, and
most of us have a much higher tolerance for details than the typical lay
person. I am exploring the question of whether we can be "right" without
going into excruciating detail.

>Much of the complaining about null hypothesis testing comes about because
>so many people interpret incorrectly. ...

So, how do we woo them into understanding without turning them off so that
they won't even read what we write?
Oh, wait.
>I don't give a fig about what the man on the street will buy.

Sorry, I guess I'm barking up the wrong tree.

Bob

Robert M. Schacht, Ph.D. <[hidden email]>
Pacific Basin Rehabilitation Research & Training Center
1268 Young Street, Suite #204
Research Center, University of Hawaii
Honolulu, HI 96814


Re: Tests of "significance"

Hector Maletta
Michael,
I do not understand your idea that mentioning a confidence level (such as
95%) is equivalent to "arriving at a Bayesian conclusion using frequentist
methods." First of all, Bayesian analysis is not akin to frequentist
conceptions of probability; second, statements about confidence levels are
not [necessarily] [or most often] Bayesian.
It is also odd that you think that using such a confidence level is "an
effort to create a confidence interval where none exists". In fact, whenever
you have a sample you have a sampling standard error; and whenever you have
the standard error of an estimate (which equals the population standard
deviation divided by the square root of the sample size), the choice of a
confidence level (95% or whatever) automatically determines a confidence
interval: the confidence interval is defined as the interval around the
population mean, measured in standard errors, containing (in a normal
distribution) 95% (or whatever percentage) of all potential means obtained
from random samples of the given size.
That confidence interval would be narrower if your sample size is larger, of
course.
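The mechanics described here (the standard error equals the standard
deviation divided by the square root of the sample size, and the chosen
confidence level then fixes the interval) can be sketched in Python. A
minimal illustration, using the sample SD as the usual estimate of the
population SD:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def mean_ci(data, confidence=0.95):
    """Confidence interval for the mean: estimate +/- z * SE,
    where SE = sample SD / sqrt(n)."""
    se = stdev(data) / sqrt(len(data))
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    m = mean(data)
    return m - z * se, m + z * se

data = list(range(100))       # toy data
lo, hi = mean_ci(data)
lo4, hi4 = mean_ci(data * 4)  # 4x the n: the interval roughly halves
```

The last line shows the narrowing with sample size that the paragraph
above describes.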
Hector


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Granaas, Michael
Sent: 10 April 2008 16:03
To: [hidden email]
Subject: Re: Tests of "significance"

How about: "based on the data available we have evidence that participant
satisfaction and employment outcome are related."  (I would add a comment
about the size of the relation suggested by the data, perhaps as a
confidence interval.)

To me the use of "95%" (or any other percentage) is arriving at a Bayesian
conclusion using frequentist methods.  Or, perhaps, it is an effort to
create a confidence interval where none exists.  Either way I am not
comfortable.

Oddly enough I would be comfortable if you reported a 95% CI for the effect
size.

One place where frequentists and Bayesians agree is that you need replication
to create some level of confidence in your results.  However, frequentist
approaches are often taught as if a single study decides the issue.  Here we
have a single study/analysis which provides evidence that the null is false
and we can say we have evidence that there is a relation.

If additional studies/analyses reached the same conclusion then I think we
can talk about levels of confidence in the finding.

Michael

****************************************************
Michael Granaas             [hidden email]
Assoc. Prof.                Phone: 605 677 5295
Dept. of Psychology         FAX:  605 677 3195
University of South Dakota
414 E. Clark St.
Vermillion, SD 57069
*****************************************************






Re: Tests of "significance"

Kornbrot, Diana
I continue to INSIST: you MUST have DESCRIPTIVE STATISTICS, preferably with
confidence limits.

diana


On 11/4/08 05:27, "Hector Maletta" <[hidden email]> wrote:

> Michael,
> I do not understand your idea that mentioning a confidence level (such as
> 95%) is equivalent to "arriving at a Bayesian conclusion using frequentist
> methods." First of all, Bayesian analysis is not akin to frequentist
> conceptions of probability; second, statements about confidence levels are
> not [necessarily] [or most often] Bayesian.
> It is also odd that you think that using such a confidence level is "an
> effort to create a confidence interval where none exists". In fact, whenever
> you have a sample you have a sampling standard error; and whenever you have
> the standard error of an estimate (which equals the population standard
> deviation divided by the square root of the sample size), the choice of a
> confidence level (95% or whatever) automatically determines a confidence
> interval: the confidence interval is defined as the interval around the
> population mean, measured in standard errors, containing (in a normal
> distribution) 95% (or whatever percentage) of all potential means obtained
> from random samples of the given size.
> That confidence interval would be narrower if your sample size is larger, of
> course.
> Hector




Professor Diana Kornbrot
   email:  [hidden email]
   web:    http://web.mac.com/kornbrot/iweb/KornbrotHome.html
Work
School of Psychology
University of Hertfordshire
College Lane, Hatfield, Hertfordshire AL10 9AB, UK
    voice:     +44 (0) 170 728 4626
    mobile:   +44 (0) 796 890 2102
    fax          +44 (0) 170 728 5073
Home
19 Elmhurst Avenue
London N2 0LT, UK
   landline: +44 (0) 208 883 3657
   mobile:   +44 (0) 796 890 2102
   fax:         +44 (0) 870 706 4997
