Most of the time, journalists who cover medical research are working to ensure that the stories they write are accurate and thoroughly reflect the findings and caveats of new studies. But what happens when the journal editors themselves are misled regarding a study’s data?
Do journalists who covered the study with the dubious data have a responsibility to follow up and write another story, or to update the older story for people who find it online? There’s no simple answer to that question, but it’s certainly important for journalists to at least do what they can to keep up with these developments. Then they can determine whether an additional story or update to an earlier story is necessary.
Retraction Watch is one of the best places for journalists to keep an eye on such issues, but sometimes articles aren't retracted, or the potential problems fly under the radar. For example, a recent story in the New York Times by Katie Thomas illustrates the value of reading medical-related legal proceedings very carefully.
The tip-off was in a footnote buried in a legal brief. A group of patients is suing Johnson & Johnson and Bayer over the safety of Xarelto (rivaroxaban), a top-selling blood-thinning drug used to prevent stroke in patients with atrial fibrillation and to treat blood clots such as deep vein thrombosis. The Food and Drug Administration is currently investigating whether the data leading to Xarelto’s approval may have been tainted by a blood-testing device that malfunctioned and subsequently misreported patients’ bleeding risk during trials comparing Xarelto to warfarin, an older and commonly used blood thinner. The device’s malfunctioning may have caused patients to receive too much warfarin, which would have made results from Xarelto look better by comparison.
In a letter to the New England Journal of Medicine, published February 25, Duke researchers reported findings of a re-analysis of the head-to-head rivaroxaban-warfarin trial. They reported that rivaroxaban remained non-inferior to warfarin, and that the malfunctioning device hadn’t affected the safety or efficacy findings of the trials. But the research letter left out other lab data that could have made the re-analysis more accurate — despite a peer reviewer’s request for those data. Bayer and Johnson & Johnson, meanwhile, were providing those data to the FDA, Thomas reported.
So did the malfunction affect the trial results or not? A later European analysis of the lab data also said it probably hadn’t, but, as Thomas reports, the incident recalled the Vioxx scandal in which Merck appeared to influence the final data submitted for publication. Were the Duke researchers influenced by the companies funding the trial? Both sides deny any collusion, but the sequence of events drives home the importance of journalists including potential conflicts of interest in stories about medical research and new drugs. It’s impossible to know how much influence did or didn’t occur, and certainly a study’s findings aren’t unreliable simply because they’re industry-funded.
At the least, however, readers should know what conflicts might exist, and journalists may want to follow up on earlier stories if and when the information they relied on changes.