A few choice problems and quotations:
Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trial.
Most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
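To pin down what a P value actually is, here's a quick simulation sketch of my own (the numbers are made up for illustration, not taken from the article): the P value is the probability, assuming there is no real effect, of seeing data at least as extreme as what was observed. It is not the probability that the finding is a fluke.

```python
import numpy as np

# Assumed-for-illustration numbers: an observed difference in group means
# of 0.4, with 50 subjects per group.
rng = np.random.default_rng(1)
observed_diff = 0.4
n = 50

# Simulate 100,000 "null worlds" in which both groups are drawn from the
# same distribution, and record the group-mean difference in each.
null_a = rng.normal(0, 1, (100_000, n)).mean(axis=1)
null_b = rng.normal(0, 1, (100_000, n)).mean(axis=1)
null_diffs = null_a - null_b

# The (two-sided) P value: the fraction of null worlds at least as extreme
# as the observed difference.
p = np.mean(np.abs(null_diffs) >= observed_diff)
print(f"P value: {p:.3f}")
```

Note what the simulation conditions on: a world with no effect. It says nothing by itself about how likely the effect is to be real.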
There’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: there is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
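The “real effect or improbable fluke” problem can be made concrete with a little arithmetic of the kind the article's boxed examples walk through. Here's a minimal sketch; the inputs (10 percent of tested hypotheses true, 80 percent power) are my own illustrative assumptions:

```python
# Out of 1,000 hypotheses tested at p < .05, how many "significant"
# results are flukes? All inputs below are assumptions for illustration.
n_tests = 1000
prior_true = 0.10   # fraction of tested hypotheses that are actually true
power = 0.80        # chance a test detects a real effect
alpha = 0.05        # chance a test flags a nonexistent effect

true_effects = n_tests * prior_true         # 100
null_effects = n_tests - true_effects       # 900

true_positives = true_effects * power       # real effects found: 80
false_positives = null_effects * alpha      # flukes flagged:     45

fdr = false_positives / (true_positives + false_positives)
print(f"Share of significant results that are flukes: {fdr:.0%}")  # ~36%
```

Even with a seemingly strict 5 percent threshold, roughly a third of the "discoveries" in this scenario are flukes, and nothing in the P value itself tells you which ones.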
The 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
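To see that equivalence concretely, here's a sketch of my own using scipy on made-up data: for a one-sample t-test, the 95 percent confidence interval excludes the null value exactly when the two-sided P value falls below .05.

```python
import numpy as np
from scipy import stats

# Made-up sample: 30 draws from a normal distribution with true mean 0.5.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.5, scale=1.0, size=30)

# Two-sided one-sample t-test against a null mean of 0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

# 95% confidence interval for the mean, built from the same t distribution.
mean = sample.mean()
sem = stats.sem(sample)
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)

print(f"p = {p_value:.4f}")
print(f"95% CI = ({ci_low:.3f}, {ci_high:.3f})")

# Same math, two presentations: the CI excludes 0 iff p < .05.
agree = (p_value < 0.05) == (not (ci_low <= 0 <= ci_high))
print(f"CI excludes 0 exactly when p < .05: {agree}")
```

Reporting an interval is often more informative than reporting a P value, but since it is the same calculation underneath, it inherits the same misinterpretations.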
In the article, Tom Siegfried also discusses the difficulties of Bayesian logic and meta-analyses, complete with examples of the various problems in boxes at the end.
I can't escape the feeling that most people who read this will nod their heads, agree sagely, and then proceed with research as usual. At the least, it ought to sound a warning bell for those of us (myself included) engaged in policy-relevant research to be a bit more humble about our results.