If it seems the newest studies are always reporting some new link – an association between two things or an increase or decrease in this, that or the other – it’s not your imagination.
Positive findings – those that find “something” – tend to end up in journals more often. But a recent study in PLOS ONE suggests that this bias has diminished, thanks to a change in trial reporting standards around the year 2000.
From 1970 to 1999, more than half (57 percent) of the big-budget clinical trials funded by the National Heart, Lung, and Blood Institute that focused on treating or preventing heart disease found that the drug or supplement under study had a positive effect. From 2000 to 2012, only 8 percent of such studies found positive results.
What happened? ClinicalTrials.gov happened. Following a 1997 law (the FDA Modernization Act) that required researchers testing drugs or dietary supplements on humans to register their trials, the National Institutes of Health’s National Library of Medicine launched ClinicalTrials.gov on Feb. 29, 2000. Researchers receiving U.S. federal funds henceforth had to specify their expected outcomes in advance and register their trials in the publicly searchable database, regardless of where in the world the research took place.
One apparent result of this transparency requirement is a reduction in publication bias, the PLOS ONE researchers from Oregon State University found. Publication bias is a skew in what gets published relative to all the research that is actually conducted. It can occur at the point of submission (which studies researchers submit) or selection (which studies journal editors accept). For years, publication bias has favored positive findings – studies that find an increase, a decrease or an association in whatever they are measuring. Studies that find no change, no difference or no association tend to be seen as less interesting and traditionally get published less often. In some cases, null findings are news, such as yet another study finding no link between autism and the MMR vaccine.
With drug studies in particular, a publication bias toward positive findings means that studies showing a benefit tend to be the only ones published. The problem arises if, of the studies actually conducted, only two show a benefit while 20 others show none. If those 20 are never submitted for publication, the peer-reviewed evidence base offers a skewed picture of a drug’s clinical value. Researchers – including those funded by industry – might report only what was successful in a study, leaving out the null findings. By requiring researchers to declare what they’re testing before they run their trial, the new standards make it harder to change what they’re looking for after the results come in.
The findings of this new study are good news for journalists in two ways. First, journalists can feel confident that what they’re seeing published in the research literature more accurately represents what researchers are finding when they test drugs or supplements.
Second, for enterprising journalists interested in a scoop born of deep digging, story ideas may be waiting among the trials registered with ClinicalTrials.gov. Which trials were registered but never published? Which drugs were tested both before and after 2000, and did the published findings for those drugs change after the new transparency requirements?
Here’s another question: While this PLOS ONE study focused on cardiovascular disease, does a similar pattern hold in other areas of medicine? (Probably.) The same researchers are investigating whether a similar trend has occurred with clinical trials of behavioral interventions, whose investigators are encouraged but not required to register their trials. Those findings will likely be illuminating as well.