Are we nearing the end of traditional medical journal articles?

Photo: National Eye Institute via Flickr

Medical publishing has faced many challenges as the Internet has evolved: predatory journals, the rise of open access journals, and the simple fact that stacks of physical paper journals are dwindling, eroding a long-time key funding source.

In one recent article – ironically enough in the journal Circulation: Cardiovascular Quality and Outcomes – Harlan M. Krumholz, M.D., describes nine “deficiencies in the current model that fuel the sense that journals as we have known them are approaching their final act.”

For the time being, journal articles still form the bedrock of journalistic reporting on medical research. But the challenges Krumholz describes – very familiar to journalists – lead him to the million-dollar question: “The question for all of us in medical publishing – and for those who consume medical knowledge – is how that would best be accomplished in a new world that is flat, digital, and transparent.”

While Krumholz doesn’t really offer any solutions to the problems he discusses, a quick review is beneficial for journalists. It can help us avoid the pitfalls associated with too much reliance on studies, and not enough skepticism about where the data comes from and how it’s being presented. Consider his list:

Too slow: Years can pass between the time a study concludes and the point it shows up in a journal, even with the somewhat shortening effect of online releases before official paper publication. Journalists seeking the most up-to-date information on a particular condition, treatment or trend already are familiar with the frustrating gap between “today” and the most recent data available. A gap of five or more years is common. Four years is a bonus. Two or three years is a miracle. And for patients wondering whether the most recent drug will work for them, the time it takes to gather sufficient safety and effectiveness data can be maddening, yet relies on the current publishing model.

Too expensive: The reason for high paywalls – allegedly – is the high cost of maintaining a web presence and journal editors, staff and sales personnel, which Krumholz says is increasing. We already know that funding for the studies has dropped. Krumholz suggests that medical knowledge will be considered “a social good” in the future, which could lower cost barriers, but that sounds pretty optimistic and doesn’t help us much right now.

Too limited: Any journalist regularly covering medical studies knows that a lot of data is left out of each published article, which typically runs only about 3,000 to 5,000 words. In an age of seemingly limitless online storage, there’s not really an excuse for cramming an entire study’s worth of findings into eight to 12 pages.

Too unreliable: There’s no shortage of critiques of the peer review process. Krumholz doesn’t discuss the problem of conflicts of interest (both commercial and ideological), but it remains a problem without a clear solution. Some have argued for an ongoing, public and transparent peer review process, but that approach has its own flaws.

Too focused on metrics: In the world of medical publishing, the impact factor rules the roost for determining “quality.” But pointing out the flaws of relying too heavily on metrics for assessing articles is preaching to the choir for journalists who face similar challenges.

Too powerful: Krumholz compares the journal article acceptance process to that of college admissions, with a similar level of subjectivity that places editors “in a remarkably powerful position.” Publication “can transform a career or influence millions of dollars or more in sales of a product,” he points out, and this influence may be best distributed more broadly. Broader distribution might mean less opportunity for bias or corruption to complicate journalists’ reporting, but it also could mean a lot more material to sift through.

Too parochial: Publication bias is a well-established concern in research, but Krumholz brings up another aspect that gets less attention: too little diversity in terms of language, nationality, gender, and race/ethnicity. This flaw presents one of the greatest challenges for journalists writing for a broad audience. Simply being aware of the limitation can help guide journalists in deciding which questions to ask researchers about their findings, and in looking for additional information to add context to their reporting.

Too static: As the website Retraction Watch has shown, a lot of articles get retracted, and rarely for consistent reasons. Far from being a “living document,” a journal article “is not interactive and has no capacity for iterative change spurred by input from the larger audience,” Krumholz writes. Changing this model doesn’t sound easy, but it could give journalists a wealth of better information for making sense of findings and determining how reliable they might be.

Too dependent on a flawed business model: One of the most relevant points Krumholz makes for journalists is the conflict-of-interest concern surrounding “hefty advertising revenues” at some journals. It may surprise journalists to learn that journals “rarely, if ever, expose their advertising revenue sources even as disclosure is mandatory for authors,” making it harder to discern the possible factors that play a role in publication bias.

Most of these issues will be familiar to journalists who regularly write about medical research, but Krumholz’s reminder can supply the dose of skepticism journalists need to report as thoroughly as possible.

What to do about these challenges, however, is another beast entirely.
