[Photo: Linda and Phil Montague on a hike near Tucson, mid-1980s.]
Back on May 22, 2012, I wrote a blog entry about the conclusions of one Dr. Ioannidis of Stanford University, a respected biostatistician, that “It can be shown that most published research is wrong.” I gave my own version of that dictum: “It can be shown that most published conclusions are based on faulty statistics,” or something like that.
Obviously, a proposition can be correct even though the statistical evidence for it is weak. Unfortunately, it is also possible for a proposition to be false even if it is backed by pretty good stats. I don’t want to get into stuff like confidence intervals, statistical “power”, Type 2 errors, and other equally arcane and unamusing topics; I can only get you to read these things if I keep them light and frothy. But anyway, it seems that Dr. I is to be taken seriously. I base that on the observation that no less serious a rag than the Economist made his ideas their cover story this week (“How Science Goes Wrong”). I would urge you to read it yourself, but I know most of you won’t. It’s not light and frothy, that’s for sure. So I sacrificed myself and read it carefully. I think it boils down to the following set of propositions:
There is too much crap being published. This is a result of several mutually
supporting forces:
- Publish or perish. You can’t make full professor if you publish just one paper per year, even if it’s pretty good. Six sacks of you-know-what outweigh one sack of the real stuff.
- Nobody wants to publish negative results. If you do an experiment and find out that it doesn’t work, you are unlikely to publish it. If you do send it to a journal, most of the time they will reject it.
Science is supposed to remain pure because “truth” is only ascertained after an initial experiment is replicated. Purity is also, in theory, enhanced by the process of “peer review”: several experts in your field read your paper with a critical eye and then tell the editor whether or not it should be published. However:
- Many clever “experiments” have demonstrated that most referees, as these folks are called, are lazy bastards who don’t really do the job right. And why not? They have their own papers to write, and nobody gets promoted for being a good referee. I was a referee on many a paper, and I should be ashamed of myself. Besides, referees don’t get paid.
- Nobody wants to repeat an experiment that someone else has already done. If you get the same result, well, whoopee! If you get a different result, who knows who is right? You get no merit increases or promotions for checking other people’s work. They pay off on original discoveries.
- Funding agencies are very unlikely to give you a big grant just to check on the work of somebody else. They want something new, cutting edge, rolling back the frontiers, stuff like that. Besides, your colleagues won’t like you if you spend all your time trying to prove them wrong. Nobody will have lunch with you at the annual scientific conference.
Most scientists are poor statisticians. (That goes for me, in spades.) Increasingly, researchers are called upon to tease meaning out of enormous data sets. This is particularly true in biomedical research, but also in some branches of physics and psychology. Often the result sought is hidden in an enormous amount of noise; if it were obvious, somebody would have found it long ago.
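For the two or three of you who actually like arithmetic, here is a little back-of-the-envelope sketch, in Python, of why these forces add up to so much wrong stuff in print. The numbers are my own illustrative assumptions, in the spirit of the sort of example the Economist walks through, not anybody’s real data:

    # Back-of-the-envelope: how many "significant" findings are actually noise?
    # All four numbers below are illustrative assumptions, not real data.
    n_hypotheses = 1000   # ideas tested in some field
    frac_true = 0.10      # suppose only 10% of those ideas are actually true
    power = 0.80          # chance a real effect passes the test (1 - Type 2 error rate)
    alpha = 0.05          # chance pure noise passes the test anyway (Type 1 error rate)

    n_true = n_hypotheses * frac_true   # 100 real effects
    n_dud = n_hypotheses - n_true       # 900 duds

    true_positives = power * n_true     # 80 real effects detected
    false_positives = alpha * n_dud     # 45 duds that look like discoveries

    published = true_positives + false_positives
    print(f"'Significant' findings: {published:.0f}")
    print(f"Wrong ones: {false_positives:.0f} ({100 * false_positives / published:.0f}%)")

Even with respectable statistics, roughly a third of the “discoveries” (45 out of 125) are mirages, and that is before publish-or-perish and lazy refereeing pile on.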
So, the Economist suggests a bunch of quick fixes. Some of them the granting agencies are beginning to insist upon, which is good. Others, though, are pie in the sky. We will continue to muddle through, I fear. Maybe the Jack Andrakas of the world will clean up the mess. My generation never will.
P.S. Was this the most boring thing I ever wrote? Probably.