Rotten statistics

11 messages

Rotten statistics

Hector Maletta
I am copying, for the edification of list members, an article from this week's The Economist. The URL is http://www.economist.com/science/displaystory.cfm?story_id=8733754.
Hector

Medical statistics
Signs of the times
Feb 22nd 2007 | SAN FRANCISCO
From The Economist print edition

Why so much medical research is rot

PEOPLE born under the astrological sign of Leo are 15% more likely to be admitted to hospital with gastric bleeding than those born under the other 11 signs. Sagittarians are 38% more likely than others to land up there because of a broken arm. Those are the conclusions that many medical researchers would be forced to make from a set of data presented to the American Association for the Advancement of Science by Peter Austin of the Institute for Clinical Evaluative Sciences in Toronto. At least, they would be forced to draw them if they applied the lax statistical methods of their own work to the records of hospital admissions in Ontario, Canada, used by Dr Austin.

Dr Austin, of course, does not draw those conclusions. His point was to shock medical researchers into using better statistics, because the ones they routinely employ today run the risk of identifying relationships when, in fact, there are none. He also wanted to explain why so many health claims that look important when they are first made are not substantiated in later studies.

The confusion arises because each result is tested separately to see how likely, in statistical terms, it was to have happened by chance. If that likelihood is below a certain threshold, typically 5%, then the convention is that an effect is "real". And that is fine if only one hypothesis is being tested. But if, say, 20 are being tested at the same time, then on average one of them will be accepted as provisionally true, even though it is not.
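
A quick simulation makes that arithmetic concrete. This is a minimal sketch in Python (NumPy assumed available), not anything from the article:

# Test 20 true-null hypotheses at the 5% level, many times over,
# and count the spurious "discoveries" in each batch.
import numpy as np

rng = np.random.default_rng(0)
n_batches, n_tests, alpha = 10_000, 20, 0.05

# Under a true null hypothesis the p-value is uniform on [0, 1],
# so p-values can be drawn directly instead of simulating raw data.
p_values = rng.uniform(size=(n_batches, n_tests))
false_positives = (p_values < alpha).sum(axis=1)

print(false_positives.mean())         # about 1.0 false positive per batch of 20
print((false_positives >= 1).mean())  # about 0.64: chance of at least one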

In his own study, Dr Austin tested 24 hypotheses, two for each astrological sign. He was looking for instances in which a certain sign "caused" an increased risk of a particular ailment. The hypotheses about Leos' intestines and Sagittarians' arms were less than 5% likely to have come about by chance, satisfying the usual standards of proof of a relationship. However, when he modified his statistical methods to take into account the fact that he was testing 24 hypotheses, not one, the boundary of significance dropped dramatically. At that point, none of the astrological associations remained.
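
The article does not say which adjustment Dr Austin used; the simplest standard remedy is the Bonferroni correction, which divides the significance threshold by the number of tests. A minimal sketch in Python, with purely illustrative p-values:

# Bonferroni: with k hypotheses, require p < alpha/k rather than p < alpha.
def bonferroni_significant(p_values, alpha=0.05):
    threshold = alpha / len(p_values)  # 0.05/24 is about 0.0021
    return [p < threshold for p in p_values]

# Two results pass the naive 5% test, but none survives the correction.
p_values = [0.03, 0.04] + [0.50] * 22
print(any(bonferroni_significant(p_values)))  # False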

Unfortunately, many researchers looking for risk factors for diseases are not aware that they need to modify their statistics when they test multiple hypotheses. The consequence of that mistake, as John Ioannidis of the University of Ioannina School of Medicine, in Greece, explained to the meeting, is that a lot of observational health studies (those that go trawling through databases, rather than relying on controlled experiments) cannot be reproduced by other researchers. Previous work by Dr Ioannidis, on six highly cited observational studies, showed that conclusions from five of them were later refuted. In the new work he presented to the meeting, he looked systematically at the causes of bias in such research and confirmed that the results of observational studies are likely to be completely correct only 20% of the time. If such a study tests many hypotheses, the likelihood its conclusions are correct may drop as low as one in 1,000, and studies that appear to find larger effects are likely, in fact, simply to have more bias.

So, the next time a newspaper headline declares that something is bad for you, read the small print. If the scientists used the wrong statistical method, you may do just as well believing your horoscope.

Re: Rotten statistics

Mark A Davenport MADAVENP
Bravo Hector!  We learned this in our first statistics course.  What are they teaching in these medical schools?

***************************************************************
Mark A. Davenport Ph.D.
Senior Research Analyst
Office of Institutional Research
The University of North Carolina at Greensboro
336.256.0395
[hidden email]

'An approximate answer to the right question is worth a good deal more
than an exact answer to an approximate question.' --a paraphrase of J. W.
Tukey (1962)

Re: Rotten statistics

Stephen Hampshire
My thought exactly - how can any medical researchers *not* be aware of this?

Freedman's reservations about regression (for example) pale into
insignificance if this level of statistical ineptitude is commonplace.

Stephen

Re: Rotten statistics

Hector Maletta
I would not jump to general conclusions. Most medical research is fine and highly sophisticated. Most medical journals are extremely careful in their standards.

There might be somewhat looser standards, of course, in certain institutions and certain branches of medical science, and everyone can remember certain sensational announcements about something being good or bad for your health, which later on came to nothing much.

I chose to circulate the article as a general warning against facile statistical "proof", and to teach again the difference between truth and statistical significance. If each of a set of false propositions is, say, 4% likely to appear statistically significant in a sample just by chance, then testing 25 such propositions on any given sample would on average "prove" one of them to be "true", in the sense of being statistically significant. Conversely, a false proposition with p=0.04 will be found statistically significant in one out of every 25 samples, even though it is false by hypothesis (like the link between Sagittarius and broken arms).
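
Spelled out (an annotation of Hector's figures, not part of his post; plain Python):

# 25 false propositions, each 4% likely to look significant by chance.
p, n = 0.04, 25
print(n * p)             # 1.0: one spurious "proof" expected per sample
print(1 - (1 - p) ** n)  # about 0.64: probability of at least one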

Perhaps, as some list member pointed out privately to me, this is not an SPSS issue, but I thought that the audience of this list includes many people who are not professional statisticians or seasoned analysts, and may be likely to fall into the same kind of error if they are not careful.

Hope I was mistaken.

Hector

Re: Rotten statistics

James Cantor
In reply to this post by Mark A Davenport MADAVENP
> Bravo Hector!  We learned this in our first statistics
> course.  What are they teaching in these medical schools?

How to get some medicine done in between frivolous lawsuits and HMO
forms.

Re: Rotten statistics

Mark A Davenport MADAVENP
In reply to this post by Hector Maletta
Hector,

I beg to disagree that this is not an SPSS issue.  It is.  It is also a SAS issue, a Stata issue, a LISREL issue, and so on.  We have made statistical software so easy to use that a caveman can do it (my apologies to all of you GEICO spokescavepersons on the list).  It's not just a matter of teaching stats students about core assumptions and the ramifications of violating those assumptions; it's a matter of making sure that occasional users (like medical, social science, or educational practitioners who may not be, by training, researchers) are reminded that you can throw numbers into windows and get garbage for output without ever receiving an error message or warning from the system.  You are correct, I overgeneralized about medical personnel.  However, I don't think I overreacted to the severity of the issue.

I also don't think that discussions like these are inappropriate for this
forum.  Many students frequent this list.  That article was a great post
and I think it would be beneficial for all SPSS users to see more of them
from time to time.  As for me, this article will go in my teaching file
for use in the occasional research courses that I teach.

Mark

Re: Rotten statistics

zstatman
In reply to this post by Hector Maletta
I so agree, Mark. However, I believe Hector was referring to applying methodologies versus applying software.

Cheers,
WMB
Statistical Services

=========================================
mailto:[hidden email]
http://home.earthlink.net/~statmanz
=========================================

Re: Rotten statistics

Hector Maletta
Will is right, but both are related. Easy software may have contributed to the increasing number of "instant statisticians", as we dubbed them some time ago in this very forum.

Hector

Re: Rotten statistics

Richard Ristow
In reply to this post by Hector Maletta
At 09:59 AM 2/23/2007, Hector Maletta wrote:

> Perhaps, as some list member pointed out privately to me, this is not an SPSS issue, but I thought that the audience of this list includes many people who are not professional statisticians or seasoned analysts, and may be likely to fall into the same kind of error if they are not careful.

Weighing in, in strong agreement with Mark Davenport (10:34 AM 2/23/2007):

By general consent and long-standing practice, questions about statistical methodology are welcome and useful here, whether the methodology is specific to SPSS or not. Where would we be without Marta García-Granero? And her contributions are heavily weighted toward methodology.

See, for example, the current thread "Interaction terms in regression" (begun by Maria Sapouna at 02:50 AM 2/22/2007), which is really about methodology, though I see I mention actual SPSS syntax in responding. ;-)

Re: Rotten statistics

Angshu Bhowmik
Thank you Hector - not just for the link to the article but for your excellent, if brief, analysis.

In reply to some of the comments made by other members: most medical schools do not teach any statistics worth mentioning. Given the increasing quantity of medical knowledge that students must learn these days, you will be shocked to hear that even complete human anatomy (which was a core part of the medical curriculum when I was in medical school about 20 years ago) is no longer included; many UK medical schools now teach only a basic version.

Moreover, as more patients' organisations complain about poor bedside manner, the teaching of "communication skills" has been occupying an ever increasing proportion of the medical school curriculum at the expense of subjects such as pharmacology. Sounds shocking, doesn't it? Yet medical school authorities have managed to respond to public demands by distorting priorities in such a way as to value communication skills over the basic medical knowledge that doctors are required to communicate! (In my opinion, the basic medical knowledge should have been given priority, with bedside manner taught by example as part of the practical teaching at a later stage, which is what I teach.)

You can imagine that statistics is therefore so low on the list of priorities as not to be taught at all. Indeed, having completed a period of three years in research, I had no formal statistical training at all, which led me to become one of those SPSS-aided "instant statisticians" derided earlier in this thread. I do not defend myself; my theoretical knowledge of statistics is poor. But there was never any access or funding available to enlist the services of a proper statistician (although I had scientist colleagues who helped), and it is likely that this state of affairs will continue. One can only hope that more "proper statisticians" read the medical literature and write to the editors whenever they come across examples of poor statistical methodology. Only in this way can awareness of the problem be increased.

Best wishes,

Angshu Bhowmik
Consultant in Respiratory and General (Internal) Medicine
London, UK

Re: Rotten statistics

statisticsdoc
In reply to this post by Hector Maletta
Hector,

If I can chime in alongside Mark and Richard, your posting is exactly the
type of contribution that enriches everyone's understanding of their work,
and makes the SPSS list particularly helpful for researchers and
statisticians.

The article you posted is reminiscent of a paper I read back in the 1970s in the UK. The main finding of this work was that the outcomes of surgery were significantly better in National Health Service hospitals that served Cheddar cheese in the staff cafeteria.

Best,

Stephen Brand


For personalized and professional consultation in statistics and research
design, visit
www.statisticsdoc.com
