The authors begin with a somewhat gratifying hypothesis, writing that “Although it is easy to blame journalists for poor quality reporting, problems with coverage could begin with the journalists’ sources,” and positing that difficult-to-decipher studies and misleading press releases could lead to low-caliber health reporting.
They looked at 100 studies from five major journals, as well as a sample of 348 news stories based on those studies. In general they found that higher-quality press releases led to higher-quality coverage. Unfortunately, the inverse held as well: lower-quality press releases led to lower-quality coverage. Here’s an excerpt from the “Discussion” subheading (also highlighted by Schwitzer).
…Higher quality press releases issued by medical journals were associated with higher quality reporting in subsequent newspaper stories. In fact, the influence of press releases on subsequent newspaper stories was generally stronger than that of journal abstracts. Fundamental information such as absolute risks, harms, and limitations was more likely to be reported in newspaper stories when this information appeared in a medical journal press release than when it was missing from the press release or if no press release was issued. Furthermore, our data suggest that poor quality press releases were worse than no press release being issued: fundamental information was less likely to be reported in newspaper stories when it was missing from the press release than where no press release was issued at all.
Reporters looking for a Health News Review-style “how do I ensure my story clears their quality bar?” checklist can just scroll down to the “Quality Assessment” subheading. For the record, the metrics found there apply equally well to the PR professionals who write the releases.
The conclusions are based on a survey of meta-analyses of individual participant data, which the authors broke down by data source characteristics and publication status. The work is heavy on statistical analysis, but even lay readers can understand the broad strokes of what appears to be a widespread issue.
Steve Nissen, the lead author of the analysis, said 35 of the 42 studies he looked at were unpublished and were obtained only because a court case required the drug’s maker, GlaxoSmithKline, to turn over the data.
And it isn’t just pharmaceutical companies’ financial concerns driving the suppression, Nissen and his coauthors found. At that point, it may be more of an issue of confirmation bias and other problems that have always lurked within academic research.
A surprising finding in the BMJ analysis was that serious lapses occurred even in clinical trials funded by the National Institutes of Health.
That research showed that less than half of NIH-funded clinical trials were published in a medical journal within 30 months of a trial’s completion, and that after 51 months one-third of trials remained unpublished.
In BMJ, Bob Roehr wrote about a report published by German researchers in the Canadian Medical Association Journal describing an apparent tendency for journals that accept pharmaceutical advertising to publish more positive drug-related articles than those that depend on subscription dollars to pay the bills. The study and Roehr’s summary are good reading in their own right, but the comment section is where things really get interesting.
There, Age of Autism UK editor John Stone points to a commentary penned by the Alliance for Human Research Protection’s Vera Hassner Sharav and calls into question BMJ’s sources of funding. His main focus is the tension between that publication’s Andrew Wakefield investigations and its receipt of money from an arm of Merck.
Sharav’s language is somewhat incendiary, but it’s BMJ editor Fiona Godlee’s response to her commentary (and Stone’s post) that pushes the whole thing into the realm of the remarkable. Godlee weighs in on everything right there in the comment thread, admitting that BMJ had not disclosed those conflicts of interest in the Wakefield stories simply “because it didn’t occur to us to do so,” given that it was a story focused on research fraud rather than on vaccines and medicine.
Although Vera’s claims may seem far fetched on this occasion, she is right that we should have declared the BMJ Group’s income from Merck as a competing interest to the editorial (and the two editor’s choice articles) that accompanied Brian Deer’s series on the Secrets of the MMR scare. We should also, as you say, have declared the group’s income from GSK as a competing interest in relation to these articles. We will publish clarifications.
The whole chain of events is a promising sign that increased interactivity in online publications may lead to increased transparency, and it’s well worth reading, at the very least, all of Roehr’s story and the comments that follow it. All the key bits are there.
Pia Christensen (@AHCJ_Pia) is the managing editor/online services for AHCJ. She manages the content and development of healthjournalism.org, coordinates AHCJ's social media efforts and edits and manages production of association guides, programs and newsletters.
The Internet and other media are abuzz with the news, published by BMJ yesterday, that the study published in The Lancet in 1998 by Dr. Andrew Wakefield linking autism to the MMR vaccine was fraudulent. The study of 12 children is frequently cited as proof that vaccines cause autism or play a part in the disorder, despite the fact that it was retracted. The BMJ calls the study “fatally flawed both scientifically and ethically” in a new editorial.
Covering Health has compiled some links to interesting reading on this subject, much of it specifically for journalists.
Update: Seth Mnookin, who has spent two years looking into vaccine scares, has written an interesting post about the topic, including his view that BMJ over-hyped its story, which almost certainly helped drive media coverage. Mnookin also appeared on CNN.
By sending out breathless press releases and prepping the worldwide media for a series of bombshell stories, the BMJ created the impression that this was fundamentally new news – and it wasn’t. We knew that Wakefield’s work wasn’t reliable or accurate on January 3 – and we still know that today. The stories that are currently running are not really all that different in tone or content than the stories that ran almost exactly a year ago, when a UK medical panel found there was sufficient evidence to justify stripping Wakefield of his right to practice medicine.
Background on autism from Pauline A. Filipek M.D., director of the Autism Program for OC Kids Neurodevelopmental Center and associate professor of clinical pediatrics and neurology at the University of California, Irvine, School of Medicine.
Investigating alternative treatments for autism: Trish Callahan & Trine Tsouderos, of the Chicago Tribune, wrote “Dubious Medicine,” a look at the world of alternative treatments for autism, treatments that are often risky and unproven.
Learn how to analyze and write about health and medical research studies with AHCJ’s latest slim guide. It offers advice on recognizing and reporting the problems, limitations and backstory of a study, as well as publication biases in medical journals, and it includes 10 questions you should answer to produce a meaningful and appropriately skeptical report. This guide, supported by the Robert Wood Johnson Foundation, will be a road map to help you do a better job of explaining research results for your audience.
Godlee says that researchers updating their Cochrane review of the drug “failed to verify claims, based on an analysis of 10 drug company trials, that oseltamivir reduced the risk of complications in healthy adults with influenza. These claims have formed a key part of decisions to stockpile the drug and make it widely available.”
Only after Roche was questioned by the BMJ and Channel 4 News did the manufacturer commit to making “full study reports” available. Godlee says that some questions remain, including how patients were recruited and why some neuropsychiatric adverse events were not reported.
Godlee argues that “it can’t be right that the public should have to rely on detective work by academics and journalists to patch together the evidence for such a widely prescribed drug,” saying that “Individual patient data from all trials of drugs should be readily available for scientific scrutiny.”
How about when a press release was never issued? None was available when the UK media reported results of a study about the effects of caffeine in pregnancy – before BMJ had a chance to publish online. Just the same, BMJ editor Fiona Godlee is a bit peeved. She acknowledges that, technically, there was no breach, but she maintains coverage still amounted to publicity before publication.
How did this happen? The UK’s Food Standards Agency, which funded the study, held a stakeholders meeting before BMJ issued its embargoed press release. “It was probably from this meeting that the study’s findings, and the government’s new guidelines on caffeine intake during pregnancy, were leaked,” she writes in an editorial. In this case, she continues, there was no harm done – the media got the story right.
Godlee was responding, in part, to an earlier BMJ blog post by FSA communications director Terrence Collis, who wrote the agency was less than “delighted” to get its study published in BMJ. Why? The FSA wants to show its research is high quality, but “we are even keener that the advice that reaches consumers is as clear as possible – and gets there as quickly as possible. This makes waiting around for journals to decide whether they are going to publish a real pain.” And that, he acknowledged, left time for leaks.