Implementing the ‘10 criteria’ in your medical research reporting


This tip sheet is based on an interview with Gary Schwitzer, publisher and founder of HealthNewsReview.org, who, with his team, evaluated the quality of media coverage of medical research for more than a decade. The blog portion of the site ceased regular publication in December 2018 but remains a valuable resource for health care journalists, especially its 10 criteria for covering medical research. I have summarized a few of his most helpful points for those skimming.

Q: What are the best practices for covering the costs of interventions, especially given the infamously challenging lack of transparency in the U.S. health care system?

A: First, when we report that 70 percent of the 2,600 stories we’ve systematically reviewed get unsatisfactory grades on the costs criterion, we are holding the bar very low. If anyone even takes a stab at acknowledging costs — in any of the various ways of framing it — we give them a satisfactory score. So the 70 percent that don’t reach that bar are simply not acknowledging cost at all. If you read any collection of our reviews, you’ll see the various ways the reviewers thought the cost issue could have been handled in those specific cases. For example, here’s one where the reviewers basically said, “Hey, your competitor reported that the list price of the drug is nearly $400,000 a year. Why didn’t you?”

In this review, there was no discussion of the potential costs of an Alzheimer’s vaccine. But our reviewers wrote: “It might be difficult to estimate the cost of a vaccine that hasn’t yet been tested in humans. However, the story could have pointed out that at least some researchers have begun to consider the financial impact of treatments that are in development to slow or delay Alzheimer’s.

“For example, a British study released in March estimated that a hypothetical vaccine given to everyone over 50 that delayed the onset of Alzheimer’s by five years would yield a savings of about $9,000 in health, social care, and unpaid care costs over a person’s life. It estimated that the justified cost of such a vaccine, if it had to be given every two years, would be $1,175 per dose. However, those figures would drop to a lifetime savings of $2,200 and a justified cost of $293 per dose if the vaccine delayed onset of disease by just one year.”
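Figures like these are straightforward to sanity-check before you print them. Below is a minimal Python sketch of that arithmetic; the 15-year dosing horizon is our assumption, not a number from the study, chosen only to show the shape of the calculation:

```python
# Back-of-the-envelope check of the per-dose figures quoted above.
# Assumption (ours, not the study's): dosing continues for about 15 years.

def justified_cost_per_dose(lifetime_savings, interval_years, horizon_years):
    """Spread the expected lifetime savings evenly across all doses."""
    doses = horizon_years / interval_years
    return lifetime_savings / doses

# Five-year delay in onset: ~$9,000 lifetime savings, one dose every 2 years.
print(f"${justified_cost_per_dose(9_000, 2, 15):,.0f} per dose")  # ~$1,200
# One-year delay in onset: ~$2,200 lifetime savings.
print(f"${justified_cost_per_dose(2_200, 2, 15):,.0f} per dose")  # ~$293
```

The outputs land close to the study’s $1,175 and $293 figures, which suggests the quoted numbers are internally consistent; a mismatch would be a cue to call the researchers.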

And here’s a Keytruda story that didn’t mention costs. How can you not mention the costs of a class of drugs estimated to cost $150,000 a year?

With early trials or preliminary research, it can be difficult to estimate the costs of an experimental approach. But a news story can at least cite the costs of existing alternatives: if the new approach is comparable to other approaches, cite what those alternatives cost. Our bottom line: If it’s not too early to talk about the benefits of a new intervention, it’s not too early to talk about costs.

We also know that some interventions may cost more in the short term yet are cost-effective in the long term because they reduce the need for expensive care down the road. While these longer-term costs may be difficult or impossible to pin down, we typically reward stories that make a reasonable attempt to discuss the implications.

We offer a collection of tips and resources for reporting on costs.

Summary: At least try to quantify the costs of an intervention — or its alternatives — even if you can’t get specifics or capture regional or payer-related differences. Even a cursory Google search may turn up a study or report that attempts to estimate an intervention’s costs.

Q: How do you recommend quantifying outcomes and harms if absolute numbers aren’t available, researchers are unable or unwilling to provide them or if the benefit/harms are expressed only in obscure statistical terms, such as correlation coefficients or standard deviations?

A: I can’t move beyond the point of a researcher not providing such data. That’s where I’d make a stand, with the researcher, the researcher’s institution, and any journal that would publish a manuscript without insisting that such data be provided. And if an individual journalist can’t fight that battle, then that’s where organizations like AHCJ and NASW [National Association of Science Writers] should rally to the cause. Why is it worth making a stand? Because a journalist’s job is to adequately inform readers. And you can’t do that if you’re dealing with “obscure correlation coefficients.”

Q: What is the most responsible way for journalists to determine the most relevant outcomes to focus on in their reporting, given that the primary endpoints may not be the most clinically relevant?

A: To go beyond primary endpoints requires that the journalist provide adequate context and caveats about what secondary endpoints mean — and what they may not mean — for the audience. Secondary findings need to be reported cautiously, as Mary Chris Jaklevic noted. They do not have the same statistical authority as primary findings and are more likely to be due to chance. It’s been stated that secondary findings should only be used to help interpret the primary result of a trial or to suggest avenues of further research. Yet the news release reviewed here featured those rosy-sounding secondary findings at the top, with wording that made them sound like proven benefits.
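To make the “more likely to be due to chance” point concrete, here is a small simulation with made-up parameters (10 secondary endpoints with no true effect, each tested at the conventional 0.05 threshold); none of this comes from the trial or release discussed above:

```python
import random

# Monte Carlo sketch: if a trial checks 10 secondary endpoints and the
# treatment truly does nothing, how often does at least one endpoint
# cross p < 0.05 by chance alone? (Analytically: 1 - 0.95**10, about 40%.)

random.seed(0)
trials, endpoints, hits = 10_000, 10, 0
for _ in range(trials):
    # Under the null hypothesis, each endpoint's p-value is uniform on [0, 1].
    if any(random.random() < 0.05 for _ in range(endpoints)):
        hits += 1

print(f"At least one 'significant' secondary endpoint: {hits / trials:.0%}")
```

Roughly two in five such trials would hand a careless reporter a “significant” secondary finding that is pure noise, which is why those findings belong lower in the story, wrapped in caveats.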

Q: What is a journalist’s responsibility when other publications are reporting a “big story” based only on a press release or otherwise extremely limited data and you’re under pressure to write your own story?

A: This is an issue that forces journalists to make sure this conversation takes place early, and often if necessary, in their employment, and to think about whether they’re comfortable with the editorial culture in which they’re working at the time. A journalist needs to have the freedom to dare to be different and to say, “I’m not going to report simply from a news release or with extremely limited data without at least making that a major part of the story.” And that journalist needs to be supported in that approach by editorial management. Or you change jobs (which I did on several occasions because of discomfort with management).

A journalist’s responsibility is to the audience. Ben Bagdikian, the journalist-educator-media critic, nailed it: “Never forget that your obligation is to the people. It is not, at heart, to those who pay you, or to your editor, or to your sources, or to your friends, or to the advancement of your career. It is to the public.”

Q: How does a journalist know when they’ve gotten sufficient independent outside commentary on a particularly thorny or controversial topic/study/intervention?

A: Several of the questions you are asking warrant a comment on the importance of the depth and quality of your contacts list. A journalist has to know enough to be able to say that the sources interviewed are representative of the big-picture range of viewpoints on the issue at hand. Do you know the field and the topic well enough to judge when enough is enough? If not, seek the counsel of people who can guide you: colleagues, the AHCJ listserv, editorial authors, a deeper dig into your contacts list, etc.

I can’t set a number on when enough is enough. But I can set a number on what is unacceptable: having no independent perspectives on a thorny or controversial topic is unacceptable, yet we’ve seen it so often. (On the double-barreled criterion asking “Does the story use independent sources and identify conflicts of interest?”, only 52 percent of the 2,600 stories we’ve reviewed got a satisfactory score — a coin toss about whether readers will get what they need or not.)

When I wrote AHCJ’s Statement of Principles in 2004, I wrote, “Recognize that most stories involve a degree of nuance and complexity that no single source could provide. Journalists have a responsibility to present diverse viewpoints in context. In addition, anyone with knowledge of the health care industry, of medicine and of the scientific community knows that many vested interests reside among government health spokespersons, researchers, universities, drug companies, device manufacturers, providers, insurers and so on. To reflect only one perspective of only one source is not wise. Most one-source stories lack depth and meaning. Avoid single-source stories.”

But I also wrote: “Be vigilant in selecting sources [and] asking about, weighing and disclosing relevant financial, advocacy, personal or other interests of those we interview as a routine part of story research and interviews.”

Summary: Something is better than nothing, but learn your topic well enough to be able to determine when enough is enough, or seek the advice of those who do.

Q: How can reporters identify and reduce the influence of ideological bias among their sources?

A: I think it’s the same approach as outlined above. How good is your contacts list? How good is your grasp of the topic? Beat the bushes to ask sources you trust about known or perceived ideological biases in the people you’re interviewing. Of course, following the literature over time will reveal clues about biases.

Q: How do you draw the line between disease-mongering and the rare need to explore a genuinely under-covered, under-recognized, under-diagnosed or otherwise under-acknowledged condition, problem or phenomenon?

A: It’s not the rarity of a condition that renders it prone to disease-mongering. It’s the language and context used (or not used) to describe the condition that puts it in the disease-mongering category. Check out our explanation of the disease-mongering criterion. It begins with the different forms of “mongering,” including the following:

    • Turning risk factors into diseases, with the implication that, then, these must be treated (e.g., low bone mineral density becomes osteoporosis);
    • Misrepresentation of the natural history and severity of a disease (e.g., early-stage low-grade prostate cancer);
    • Medicalization of minor or transient variations in function (e.g., temporary erectile dysfunction);
    • Medicalization of normal states (aging, baldness, wrinkles, shyness, menopause);
    • Exaggeration of how common a disorder is (e.g., using rating scales to ‘diagnose’ chronic dry eye; see the “not satisfactory” story examples on the criterion page).

Identifying disease mongering is a matter of judgment. Sometimes it’s obvious. Sometimes there’s a fine line about whether an article on irritable bowel syndrome, erectile dysfunction, restless legs syndrome or osteoporosis (all of which can be serious for some sufferers) misrepresents the condition. If the language used misrepresents the condition to the public, you’ve crossed the line.

Q: Could you share some guidance and wisdom for reporters new to covering medical studies who may feel overwhelmed or intimidated about meeting all the criteria you lay out for writing about a study?

A: First, this isn’t a cookbook. It’s not like your soufflé will be ruined if you don’t address all of our criteria all the time. These are meant to be helpful guidelines — reminders — of what our experience shows are the questions that readers need answered when hearing claims about health care interventions. So we shouldn’t be talking about feeling “intimidated” but, rather, challenged to think about these criteria and how to address them. And, if stumped, challenged to find help from people who know this stuff better.

But veteran health care journalists need help, too. Years ago, a veteran, respected wire service reporter told me she always had our HealthNewsReview.org mouse pad (listing our 10 criteria) by her side to remind her to address things that otherwise were easy to forget, such as costs and alternatives. An editor I respect once offered writers a bonus if they got a 5-star score from one of our reviews, indicating that most of the criteria had been adequately addressed.

Second, there’s a much broader issue here. HealthNewsReview.org is simply the only U.S. project that has put a stake in the ground to make the case that these 10 criteria are the ones we think patients need addressed. We’ve never claimed it’s a perfect list. We adopted it from an Australian team 13 years ago. Other teams in Canada, Germany, Hong Kong and Japan also adopted the criteria. Now only we and the Germans are left standing. Soon it will be down to one. So a lot of people internationally have thought this is the right approach. But I always ask journalists, “If not these criteria, then which do you use?” I have never received a meaningful answer when I’ve asked that question.

What kind of health care journalist are you going to be? What guidance is there to help you achieve that goal? If there’s even a single reminder or two from our 12-plus years of work that can make its mark on you, terrific. If you even stop to ask these questions, you’re well on your way to being the kind of health care journalist that your readers need you to be.

Even though we ceased daily publication at the end of 2018, I intend to keep the website alive and accessible for at least three years. All of the 6,000 pieces that we have published (news story reviews, news release reviews, blog posts, primers-tips-resources, podcasts, etc.) will still be available. And the lessons in those articles will remain as valid in the future as they are now.

Q: What is the single biggest or most common mistake you see journalists make in quantifying outcomes?

A: Being sucked in by researchers or journal articles or PR news releases that frame things in the most positive light. That results in conveying to readers that certainty exists when it simply does not. It’s often the use of relative, not absolute, risk reduction data. It’s often reporting on surrogate endpoints without explaining the limitations. Our toolkit offers journalists primers on these topics and many more. In the end, we have documented a very clear pattern of emphasizing or exaggerating potential benefits while minimizing or totally ignoring potential harms. That is a recipe for misleading the public, and for causing harm. It’s avoidable harm and avoidable ignorance that we’ve tried to attack every day for our 12-plus years.
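The relative-versus-absolute trap is easy to demonstrate with hypothetical numbers (these event rates are invented for illustration, not taken from any study):

```python
# Hypothetical trial: 2 in 100 control patients have the bad outcome,
# versus 1 in 100 treated patients. Both framings describe the same data.

control_risk = 0.02
treated_risk = 0.01

relative_risk_reduction = (control_risk - treated_risk) / control_risk
absolute_risk_reduction = control_risk - treated_risk
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # 50%
print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")  # 1%
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")   # 100
```

“Cuts risk in half” and “one extra patient in 100 benefits” describe the same trial; only the second framing gives readers the scale.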

Additional guidance can be found in AHCJ’s slim guide, “Covering Medical Research.”

Tara Haelle

Tara Haelle is AHCJ’s health beat leader on infectious disease and formerly led the medical studies health beat. She’s the author of “Vaccination Investigation” and “The Informed Parent.”

