In the medical world, mice seem to be making out pretty well. So far, researchers have reportedly cured mice of type 1 diabetes, prostate cancer, Alzheimer’s disease and blindness, just to name a few.
Most of the time, seasoned health reporters and editors take these murine miracles with a grain of salt. They know that advances in animals don’t often make it to humans.
One explanation for this high failure rate has been that while animal biology can be very similar to our own, it’s often not close enough for therapies to translate.
But there’s another, and perhaps more sinister, reason treatments often look so good in mice but fail so miserably in people: biased reporting of the study results.
In a new “meta-analysis of meta-analyses,” published in the journal PLoS Biology, researchers looked at the datasets from more than 1,400 animal studies of neurological disorders such as stroke, Parkinson’s disease and Alzheimer’s.
They compared the number of studies with results that reached statistical significance to the number that would be expected to hit that same target, based on each study’s statistical power to detect a range of plausible effect sizes — a total of 4,445 comparisons.
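The logic of that comparison can be sketched in a few lines: if each study’s power is the probability it will reach significance when the assumed effect is real, then the expected number of significant studies is just the sum of the powers, and a large excess of observed over expected suggests bias. This is a minimal illustration of that idea, not the study’s actual code; the function names and the numbers are invented for the example.

```python
def expected_significant(powers):
    """Expected count of significant studies: the sum of each
    study's power to detect the assumed effect size."""
    return sum(powers)

def excess_significance_z(observed, powers):
    """Normal-approximation z-score for observed vs. expected counts.
    The variance of a sum of independent Bernoulli trials is
    sum of p * (1 - p) over the studies."""
    expected = sum(powers)
    variance = sum(p * (1 - p) for p in powers)
    return (observed - expected) / variance ** 0.5

# Illustrative field: 10 studies, each with only 20% power,
# yet 4 of them report statistically significant results.
powers = [0.2] * 10
print(round(expected_significant(powers), 2))   # expected count: 2.0
print(round(excess_significance_z(4, powers), 2))
```

Observing twice as many significant results as the powers predict — roughly the pattern the paper reports — yields a positive z-score, flagging possible excess significance.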
“…almost 40% of studies reported statistically significant results — nearly twice as many as would be expected on the basis of the number of animal subjects,” wrote Heidi Ledford in an article for Nature News.
“The results are too good to be true,” said senior study author John P.A. Ioannidis, a professor of medicine at Stanford’s Prevention Research Center, in an interview with Ledford.
Other recent studies have found that the same problems that plague human trials bedevil animal research, too: publication bias, small study sizes, and uncontrolled, unblinded experiments that lack clear goals can all make interventions look more promising than they really are.
Geoffrey Mohan skillfully summed up the problem in his lede for the Los Angeles Times:
“Too much good news in medicine may be bad news for science, according to a new study that suggests animal research is riddled with bias that allows too many treatments to advance to human trials.”
The study researchers found that only eight of the 160 treatments they examined had “strong and statistically significant benefits in animals,” or the kind of data that should propel a treatment into human testing. However, when researchers tried to replicate one of those experiments — a study that tested melatonin for stroke prevention — they found no benefit, suggesting that even the studies that look good may still have “compromised internal validity,” AKA “major flaws.”
Pity the poor mice.