
Re: Rotten statistics

Posted by Hector Maletta on Feb 23, 2007; 5:59pm
URL: http://spssx-discussion.165.s1.nabble.com/Rotten-statistics-tp1074053p1074056.html

        I would not jump to general conclusions. Most medical research is
fine and highly sophisticated. Most medical journals are extremely careful
in their standards.
        There might, of course, be somewhat looser standards in certain
institutions and certain branches of medical science, and everyone can
recall sensational announcements about something being good or bad for your
health that later came to nothing.
        I chose to circulate the article as a general warning against facile
statistical "proof", and to teach again the difference between truth and
statistical significance. If each of a set of false propositions has, say, a
4% chance of appearing significant in a sample purely by chance, then testing
25 such propositions on any given sample will on average "prove" one of them,
in the sense of its turning out statistically significant. Equivalently, a
single false proposition with p=0.04 will be found statistically significant
in one out of every 25 samples, even though it is false by hypothesis (like
the link between Sagittarius and broken arms).
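        To make that arithmetic concrete, here is a minimal simulation sketch
in Python (illustrative only, not SPSS syntax; the 4% figure and the 25
propositions are simply the numbers from the example above, and the tests
are assumed independent):

    import random

    TRIALS = 100_000   # simulated samples
    TESTS = 25         # false propositions tested on each sample
    ALPHA = 0.04       # chance each one looks "significant" purely by chance

    total_hits = 0
    samples_with_a_hit = 0
    for _ in range(TRIALS):
        # each false proposition passes its test independently of the others
        hits = sum(1 for _ in range(TESTS) if random.random() < ALPHA)
        total_hits += hits
        samples_with_a_hit += hits > 0

    # expect about 1.0 (25 * 0.04) and about 0.64 (1 - 0.96**25)
    print("average false 'proofs' per sample:", total_hits / TRIALS)
    print("share of samples with at least one:", samples_with_a_hit / TRIALS)
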
        Perhaps, as some list member pointed out privately to me, this is
not an SPSS issue, but I thought that the audience of this list includes
many people who are not professional statisticians or seasoned analysts, and
who might fall into the same kind of error if they are not careful.
        I hope I am mistaken.

        Hector

        -----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Stephen Hampshire
Sent: 23 February 2007 17:11
To: [hidden email]
Subject: Re: Rotten statistics

        My thought exactly - how can any medical researchers *not* be aware
of this?

        Freedman's reservations about regression (for example) pale into
insignificance if this level of statistical ineptitude is commonplace.

        Stephen

        > -----Original Message-----
        > From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
        > Mark A Davenport MADAVENP
        > Sent: 23 February 2007 15:55
        > To: [hidden email]
        > Subject: Re: Rotten statistics
        >
        >
        > Bravo Hector!  We learned this in our first statistics course. What
        > are they teaching in these medical schools?
        >
        > ******************************************************************
        > Mark A. Davenport Ph.D.
        > Senior Research Analyst
        > Office of Institutional Research
        > The University of North Carolina at Greensboro
        > 336.256.0395
        > [hidden email]
        >
        > 'An approximate answer to the right question is worth a good deal
        > more than an exact answer to an approximate question.' --a paraphrase
        > of J. W. Tukey (1962)
        >
        >
        > Hector Maletta <[hidden email]>
        > Sent by: "SPSSX(r) Discussion" <[hidden email]>
        > 02/23/2007 09:30 AM
        > Please respond to: [hidden email]
        > To: [hidden email]
        > cc:
        > Subject: Rotten statistics
        >
        > I am copying, for the edification of list members, an article from
        > this week's The Economist. The URL is
        > http://www.economist.com/science/displaystory.cfm?story_id=8733754.
        > Hector
        >
        > Medical statistics
        > Signs of the times
        > Feb 22nd 2007 | SAN FRANCISCO
        > From The Economist print edition
        >
        > Why so much medical research is rot
        >
        > PEOPLE born under the astrological sign of Leo are 15% more likely
        > to be admitted to hospital with gastric bleeding than those born
        > under the other 11 signs. Sagittarians are 38% more likely than
        > others to land up there because of a broken arm. Those are the
        > conclusions that many medical researchers would be forced to make
        > from a set of data presented to the American Association for the
        > Advancement of Science by Peter Austin of the Institute for Clinical
        > Evaluative Sciences in Toronto. At least, they would be forced to
        > draw them if they applied the lax statistical methods of their own
        > work to the records of hospital admissions in Ontario, Canada, used
        > by Dr Austin.
        >
        > Dr Austin, of course, does not draw those conclusions. His point
        > was to shock medical researchers into using better statistics,
        > because the ones they routinely employ today run the risk of
        > identifying relationships when, in fact, there are none. He also
        > wanted to explain why so many health claims that look important
        > when they are first made are not substantiated in later studies.
        >
        > The confusion arises because each result is tested separately to
        > see how likely, in statistical terms, it was to have happened by
        > chance. If that likelihood is below a certain threshold, typically
        > 5%, then the convention is that an effect is "real". And that is
        > fine if only one hypothesis is being tested. But if, say, 20 are
        > being tested at the same time, then on average one of them will be
        > accepted as provisionally true, even though it is not.
        >
        > In his own study, Dr Austin tested 24 hypotheses, two for each
        > astrological sign. He was looking for instances in which a certain
        > sign "caused" an increased risk of a particular ailment. The
        > hypotheses about Leos' intestines and Sagittarians' arms were less
        > than 5% likely to have come about by chance, satisfying the usual
        > standards of proof of a relationship. However, when he modified his
        > statistical methods to take into account the fact that he was
        > testing 24 hypotheses, not one, the boundary of significance
        > dropped dramatically. At that point, none of the astrological
        > associations remained.
        >
        > Unfortunately, many researchers looking for risk factors for
        > diseases are not aware that they need to modify their statistics
        > when they test multiple hypotheses. The consequence of that
        > mistake, as John Ioannidis of the University of Ioannina School of
        > Medicine, in Greece, explained to the meeting, is that a lot of
        > observational health studies (those that go trawling through
        > databases, rather than relying on controlled experiments) cannot be
        > reproduced by other researchers. Previous work by Dr Ioannidis, on
        > six highly cited observational studies, showed that conclusions
        > from five of them were later refuted. In the new work he presented
        > to the meeting, he looked systematically at the causes of bias in
        > such research and confirmed that the results of observational
        > studies are likely to be completely correct only 20% of the time.
        > If such a study tests many hypotheses, the likelihood its
        > conclusions are correct may drop as low as one in 1,000, and
        > studies that appear to find larger effects are likely, in fact,
        > simply to have more bias.
        >
        > So, the next time a newspaper headline declares that something is
        > bad for you, read the small print. If the scientists used the wrong
        > statistical method, you may do just as well believing your
        > horoscope.
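
        The article's central numbers are easy to reproduce. Here is a
back-of-the-envelope sketch in Python (again illustrative rather than SPSS
syntax; it assumes the 24 tests are independent and uses a simple Bonferroni
adjustment, since the article does not say which correction Dr Austin
actually applied):

    TESTS = 24    # two hypotheses per astrological sign
    ALPHA = 0.05  # conventional per-test significance threshold

    # chance that at least one of 24 true-null hypotheses "passes" anyway
    fwer_uncorrected = 1 - (1 - ALPHA) ** TESTS
    print(f"family-wise false-positive rate, uncorrected: {fwer_uncorrected:.1%}")

    # Bonferroni adjustment: divide the threshold by the number of tests
    bonferroni_alpha = ALPHA / TESTS
    fwer_corrected = 1 - (1 - bonferroni_alpha) ** TESTS
    print(f"per-test threshold after correction: {bonferroni_alpha:.4f}")
    print(f"family-wise false-positive rate, corrected: {fwer_corrected:.1%}")

This prints roughly 70.8% uncorrected, a corrected per-test threshold of
0.0021 (the "dramatically dropped" boundary of significance), and roughly
4.9% after correction, back near the intended 5% level.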