Just one in every six news stories about medical research contained independent comments from someone besides the study authors, and a quarter of those commenters did not have the relevant clinical or academic expertise to be weighing in on the research. Further, just over half of the commenters had relevant conflicts of interest, but only half of those conflicts were reported in the news article. Those are the findings of a sobering, though unsurprising, new study that reveals just how much news consumers suffer from a dearth of high-quality reporting on medical research.
In plainer terms, health journalists need to be doing a better job when reporting on medical research.
Given the imperfect process of the scientific method, the need for replication of research findings, and the fact that many research findings are simply wrong, it is essential that reporting on medical research provide context on the topic and study findings. It’s also critical to include analysis of the findings from experts qualified to assess them.
Without such expert analysis, readers are left with the confusing, unhelpful and potentially harmful experience I call the "coffee — red wine — chocolate problem." That is, each of these (delicious) substances apparently will cause or prevent cancer – and cause an early death or extend our lives – depending on the study and the day.
The reality is that we see that kill-you-today-save-you-tomorrow ebb and flow of medical research headlines because what we actually "know" to be "true" depends on how a study was conducted, what researchers were looking at, how well the work was done, and whether the findings have been replicated elsewhere. Our readers do not always realize this. It's our job to make it easy for them to understand what a study means as opposed to just what it "says."
The methods and findings of a new study published in the Canadian Medical Association Journal (CMAJ) offer insight into what professionals in the field consider adequate clinical or academic expertise to comment on a study. The authors defined clinical expertise as practicing in the discipline directly relevant to the source article. They defined academic expertise as having coauthored at least five papers in the four years before publication of the study commented on, as long as those papers assessed either the exposure/intervention or the main outcome of that study.
The study’s objective was straightforward:
“We examined how frequently commenters in news stories about medical research have relevant expertise and have academic and financial conflicts, how often such conflicts are reported and whether there are associations between the conflicts and the disposition of the comments toward the findings of the source research [positive, mixed/neutral or negative].”
The authors began by looking at 591 news stories about 131 published medical studies from high-impact journals such as New England Journal of Medicine, JAMA, The Lancet, PLoS Medicine, JAMA Internal Medicine, the BMJ and Annals of Internal Medicine.
“High impact” refers to the journal having a high impact factor. Studies in these publications tend to be the most commonly included in embargoed news releases and reported on in the mainstream media. (As previously discussed on this blog, there are varying perspectives on the utility of impact factors for journalists.)
Sadly, only 16 percent of these news stories contained independent comments, according to the study. The authors of the CMAJ study focused on those 92 news articles and the 104 comments they contained from independent sources, as well as 21 journal editorials whose authors had relevant academic or clinical expertise.
The findings related to expertise, conflicts of interest and reporting of conflicts of interest are worth a close reading for all health journalists. The upshot is that independent comments frequently are absent from news coverage about medical research, and even when they are included, the commenters' conflicts of interest often go unreported.
Further, 97 percent of the comments were positive when the commenter had a conflict of interest that was “congruent with the research findings.” When not congruent, only 16 percent of the comments were positive, a finding that suggests (as has been shown in previous research) that independent commentary may be less reliable if the commenter has conflicts of interest. It is one of many lessons embodied in the 10 criteria that HealthNewsReview.org uses to assess the quality of news reporting. Criterion #6 specifically asks, “Does the story use independent sources and identify conflicts of interest?” (See HNR founder Gary Schwitzer’s discussion of this study.)
Ideally, every news story about medical research would include an outside expert whose experience and expertise are specific to the study topic, and every outside commenter’s potential conflicts of interest, as well as those of the study authors (whether or not the authors are interviewed), would be disclosed in the story.
In practice, this obviously doesn’t happen in every story, and many journalists and editors would argue that it is logistically nearly impossible. Not all conflicts of interest are financial, and ideological ones are harder to identify. There’s not always room to include every conflict of interest in a story, and there can be disagreement about what qualifies as a conflict. Aside from this, the fast-paced, deadline-driven nature of journalism makes it challenging to find and include commenters every time — although it can be argued that online stories are easier to update with comments later (as I have done myself).
I admit that I don’t always include outside commenters, or all conflicts of interest, in every story I write, for a variety of reasons that will be familiar to my colleagues. Even given those realities, these findings are pretty unsettling and unacceptable. The CMAJ study does not identify which publications it examined, but journalists at all publications should strive to produce the highest-quality reporting. That includes independent critical analysis, the disclosure of conflicts of interest, and inclusion of commenters without conflicts. Otherwise, we are not fulfilling our responsibility to inform readers about new medical research. Inadequate information can be as dangerous as misinformation when people rely on our work to make health and medical decisions.