Are epidemiological studies almost worthless?

20 February, 2007

Browsing through Nassim Nicholas Taleb’s diary, I noticed this quote:

At the AAAS conference in San Francisco I was a discussant in a session in which John Ioannidis showed that 4 out of 5 epidemiological “statistically significant” studies fail to replicate in controlled experiments.

NNT crows that this is an instance of what he has come to describe as the narrative fallacy: if you look hard enough at enough data, you will see a pattern emerge.
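That claim is easy to demonstrate with a quick simulation (mine, not Taleb’s or Ioannidis’s): run a pile of “studies” on pure noise and a steady fraction of them will come out statistically significant anyway. The study count and sample size below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# 1,000 simulated "studies", each correlating two variables that are
# unrelated by construction, across 30 subjects each.
n_studies, n_subjects = 1000, 30
false_hits = 0
for _ in range(n_studies):
    x = rng.normal(size=n_subjects)
    y = rng.normal(size=n_subjects)  # independent of x by construction
    _, p = pearsonr(x, y)
    if p < 0.05:  # the conventional significance threshold
        false_hits += 1

print(f"{false_hits}/{n_studies} pure-noise studies came out 'significant'")
# Expect roughly 50 -- about 5%, exactly the false-positive rate alpha allows.
```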

Anyway, I looked up John Ioannidis’s research and found this interesting paper, Why Most Published Research Findings Are False, which unfortunately I haven’t had the chance to read in full.

The outline of his idea is simple enough: if you look at enough data (particularly small data sets), you will find statistically significant relationships. The part I thought was interesting was this.

As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance.

Which is kind of obvious. If I correlate enough astrological data with some disease I will inevitably find some correlation, but because the prior probability of the hypothesis being true is essentially zero, there is still very little chance of the finding being true.
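To put rough numbers on that, here is a back-of-the-envelope Bayes calculation (my sketch, not taken from the paper, though Ioannidis derives an equivalent quantity he calls the positive predictive value). The power and alpha values are the conventional 0.8 and 0.05, and the priors are illustrative guesses.

```python
# P(hypothesis true | significant result), by Bayes' rule.
def prob_finding_true(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    true_positives = power * prior         # real effects that reach significance
    false_positives = alpha * (1 - prior)  # null effects that reach significance anyway
    return true_positives / (true_positives + false_positives)

# A plausible epidemiological hypothesis vs. an astrological one:
print(prob_finding_true(prior=0.10))   # ~0.64
print(prob_finding_true(prior=0.001))  # ~0.016 -- almost certainly a false positive
```

So even a textbook-perfect “significant” result barely moves the needle when the hypothesis was wildly implausible to begin with.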


Back

20 February, 2007

I’ve moved, and for the most part settled in, so with the exception of a week away coming up soon, I hope to get a bit more active in blogsville.

I’m thinking of revisiting some articles I started and then put on hold, and maybe publishing them to get me back on the horse.