Do you have all the evidence on a drug that you’re reporting about?


The registration and reporting requirements of ClinicalTrials.gov are vital to informing the evidence base – but only if study sponsors actually use the registry and keep entries updated. A recent research letter in JAMA Internal Medicine suggests this is not happening often enough. That’s useful for journalists to know if they are attempting to find all the recent evidence on a particular drug or intervention.

“Missing or incomplete reporting of clinical trial results and its scientific and ethical consequences are well documented,” wrote Kevin M. Fain, J.D., M.P.H., Dr.PH, and his team at the National Institutes of Health.

“One concrete example of this problem occurs when a sponsor conducts several studies of a particular drug for a particular condition, but only some (or none) of the studies make their way into the public domain, leaving a distorted body of public evidence.”

The researchers in this study looked for trials that had the same industry sponsors and tested the same drug for the same condition. They then compared how many of those trials had results posted in PubMed or on ClinicalTrials.gov. They limited their search to phase 2, 3 and 4 studies that included at least one U.S. site and that ended or stopped between January 2007 and December 2009.

The researchers examined only a convenience sample; rather than analyzing every identical sponsor-drug-condition set that ended in 2007-09, they stopped after identifying 96 sets. Those 96 sponsor-drug-condition combinations included 86 drugs tested in 329 trials.

It often is a good idea to ask researchers why they opt for a convenience sample, so I reached out to Fain, senior adviser for policy and research at ClinicalTrials.gov, which is within the NIH’s National Library of Medicine.

“We used convenience sampling because the process of identifying each unique sponsor-drug-condition trial set was resource-intensive, requiring manual searches of ClinicalTrials.gov data to determine if relevant information was identical in two or more study records,” Fain told me.

“This approach ensured that we compiled accurate trials sets,” he continued. “Although these 96 trial sets may not represent all trial sets in the ClinicalTrials.gov data, we believe this method was an effective approach to assess a novel question of where results of trials by the same sponsor, drug, and condition were being reported, if reported at all.”

Fain’s team then looked for results reported from each trial up to seven years after its completion/termination and whether the drug had received FDA approval for any indication. Here are some of the findings:

  • Of the 329 trials, 76 percent had publicly reported results.
  • About a third (34 percent) had been published both in PubMed and on ClinicalTrials.gov. One in five (19 percent) were only in PubMed, and 24 percent were reported only on ClinicalTrials.gov.
  • Only 60 percent of the 96 combinations had reported results for all trials conducted, and 13 of the sponsor-drug-condition sets (13.5 percent) had no publicly reported data available at all.
  • Of the 55 FDA-approved drugs studied in 214 trials, 86 percent had publicly reported results; 30 percent were reported only in ClinicalTrials.gov.
  • Of the 31 unapproved drugs studied in 115 trials, just 61 percent had publicly reported results.
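Those bullet-point figures hang together. A quick arithmetic check, in Python, using only the counts stated above:

```python
# Consistency check of the figures reported in the research letter.
approved_trials = 214      # trials of the 55 FDA-approved drugs
unapproved_trials = 115    # trials of the 31 unapproved drugs
total_trials = approved_trials + unapproved_trials
print(total_trials)        # 329, matching the stated total of trials

approved_drugs = 55
unapproved_drugs = 31
print(approved_drugs + unapproved_drugs)  # 86, matching the stated drug count

sets_total = 96
sets_no_results = 13       # sets with no publicly reported data at all
pct_no_results = round(100 * sets_no_results / sets_total, 1)
print(pct_no_results)      # 13.5, matching the "13.5 percent" figure
```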

That’s a lot of numbers to take in, but the bottom line is that many trials are occurring without the results being publicly reported as they should.

“Results remain unavailable in ClinicalTrials.gov or PubMed seven or more years after study completion for nearly one-quarter of sampled drug trials and more than one-tenth of sampled sponsor-drug-condition trial sets,” the authors wrote.

What does that mean? For journalists, it means you may not have all the data related to a particular drug that you think you have, even with a thorough search on PubMed. At the least, you should be looking for interim or unpublished findings reported at ClinicalTrials.gov if you want to cover all your bases.
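For journalists who run that kind of check routinely, ClinicalTrials.gov offers a public API that can list every registered study of a given drug for a given condition. This is a minimal sketch, assuming the v2 `studies` endpoint and its `query.intr` and `query.cond` parameters; verify the details against the current ClinicalTrials.gov API documentation before relying on it:

```python
from urllib.parse import urlencode

# Assumed endpoint of the ClinicalTrials.gov v2 API (check current docs).
BASE = "https://clinicaltrials.gov/api/v2/studies"

def build_query_url(drug: str, condition: str, page_size: int = 100) -> str:
    """Return a URL listing registered studies of `drug` for `condition`."""
    params = {
        "query.intr": drug,       # intervention (drug) name
        "query.cond": condition,  # condition or disease
        "pageSize": page_size,
    }
    return f"{BASE}?{urlencode(params)}"

# Example query (drug and condition are illustrative, not from the study):
url = build_query_url("aspirin", "stroke")
print(url)
# Fetching this URL (e.g. with urllib.request) returns JSON whose "studies"
# array includes each trial's registration record and, where the sponsor has
# posted them, its results section.
```

Comparing that list against a PubMed search for the same drug and condition is one way to spot trials whose results never made it into the published literature.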

Further, if you are relying on systematic reviews and meta-analyses, they may not represent the full evidence base either. That is especially true if the unavailable results are negative findings – omitting them from a systematic review can skew its conclusions.

These reporting inconsistencies are a research problem, but they can also skew your reporting if you think you are drawing on the entire evidence base and you’re not.
