Tag Archives: press releases

Caveats about causality in medical studies linked to more accurate news coverage

Tara Haelle

About Tara Haelle

Tara Haelle (@TaraHaelle) is AHCJ's medical studies core topic leader, guiding journalists through the jargon-filled shorthand of science and research and enabling them to translate the evidence into accurate information.


It’s a well-worn mantra: Correlation does not equal causation. But even if we know this, is it always accurately and responsibly reflected in our stories and headlines?

It can be simpler and more elegant to say “Vodka causes sexually transmitted infections” in a headline than “Vodka consumption associated with increased risk of sexually transmitted infections.” (Note: This is not a real headline or based on a real study.) But in this made-up example, it’s laughably obvious that vodka itself does not cause STIs.
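Since the point is statistical, here is a minimal sketch of how a confounder can manufacture exactly this kind of headline-ready correlation. Everything in it is hypothetical: the variable names, the effect sizes and the “risky social behavior” confounder are invented for illustration, in the spirit of the vodka example above.

```python
# Hypothetical simulation: a lurking confounder creates a strong
# vodka/STI correlation even though vodka causes nothing here.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Made-up confounder: frequency of risky social behavior.
risky_behavior = rng.exponential(scale=1.0, size=n)

# Both variables are driven by the confounder plus independent noise;
# vodka consumption never appears in the STI equation.
vodka = 2.0 * risky_behavior + rng.normal(0.0, 0.5, n)
sti_risk = 1.5 * risky_behavior + rng.normal(0.0, 0.5, n)

r = np.corrcoef(vodka, sti_risk)[0, 1]
print(f"correlation(vodka, STI risk) = {r:.2f}")  # roughly 0.9, yet no causation
```

A naive read of that output yields the “vodka causes STIs” headline; controlling for the confounder would make the association vanish.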

Controversy over blood pressure trial demonstrates danger of relying on press releases

Tara Haelle


When the headline on a news release begins with “landmark” and includes the words “lifesaving,” “greatly,” and “milestone,” a good health reporter’s Spidey sense should tingle.

That holds true even when the announcement comes from the National Institutes of Health. The NIH’s release about its Systolic Blood Pressure Intervention Trial (SPRINT) in September set so many red flags waving that they could have held a parade.

Recent reporting relied on incomplete releases

We all know that press releases are, by definition, designed to highlight positive news. And that’s being generous. But a pair of rather skimpy releases issued by biopharma companies over the past few days takes the notion of spin to a new and troubling level.

In the first case, Medarex last weekend issued a release saying that, in a Mayo Clinic study, its experimental prostate cancer drug dramatically shrank tumors in two men. Oddly, as the Associated Press noted, the release offered no info on how many people participated in the trial, whether other patients improved or worsened, how long they had survived, or whether the study was an early-, mid- or late-stage trial. Moreover, the two men still had to undergo surgery to remove the remaining tumors. Just the same, Medarex stock popped more than 20 percent on the news.

A couple of days later, Chroma Therapeutics issued a release touting an alliance with GlaxoSmithKline to use its technology to discover and develop four compounds to treat inflammatory disease. The deal has the “potential” to generate more than $1 billion if all four programs are successful, but no other figures concerning milestone and option payments were mentioned. As The In Vivo Blog pointed out, that $1 billion is highly contingent on all sorts of hurdles being cleared. But how often does anything go according to plan? Nonetheless, the eye-popping $1 billion figure was duly noted by various mainstream and trade media.

We asked Reuters’ Ben Hirschler for his take on the Chroma story and he wrote us that reporting such announcements is a “perennial issue, because the smaller partner routinely wants to play up the maximum figure, while details of the terms are often not disclosed. We always stress the number is a potential payout that depends on success in development and payments could be years away. You could, as you say, argue such figures are spin … since it is unlikely the company will actually hit the jackpot and get the top payout. But … reporting the ‘up to’ figure (alongside the much smaller upfront element) can be useful for investors, because it gives an indication of how significant any successful drugs from a collaboration could be, if everything goes right.”
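To make Hirschler’s point concrete, here is a back-of-the-envelope sketch with entirely hypothetical numbers — none come from the Chroma/GSK deal, whose terms were not disclosed. The “up to” headline figure is the sum of every milestone paying out, while a risk-adjusted value weights each payment by its odds of ever being earned.

```python
# Hypothetical deal economics: all figures and probabilities are invented
# for illustration; the actual Chroma/GSK terms were not disclosed.
upfront = 16e6  # assumed upfront payment, paid regardless of outcome

# (milestone payment, assumed probability the milestone is ever reached)
milestones = [
    (60e6, 0.50),   # compounds enter the clinic
    (175e6, 0.20),  # successful pivotal trials
    (750e6, 0.05),  # all four programs approved and selling
]

headline = upfront + sum(pay for pay, _ in milestones)
risk_adjusted = upfront + sum(pay * prob for pay, prob in milestones)

print(f"headline 'up to' value: ${headline / 1e6:,.0f} million")       # ~$1,001M
print(f"risk-adjusted value:    ${risk_adjusted / 1e6:,.0f} million")  # ~$118M
```

Under these invented odds, the billion-dollar headline shrinks to roughly a tenth of that, which is why the much smaller upfront element deserves equal billing.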

In these two cases, however, vital info was missing from the press releases, which made it virtually impossible to make useful comparisons. Meanwhile, the companies still succeeded in getting across their bottom-line messages and impressions. We’re not suggesting that such press releases should be ignored. But companies need to be held accountable for such omissions, or journalists may find that these gambits become accepted practice.

Wrestling with the FDA recall e-mail avalanche

Andrew Van Dam

About Andrew Van Dam

Andrew Van Dam of The Wall Street Journal previously worked at the AHCJ offices while earning his master’s degree at the Missouri School of Journalism.

NPR’s April Fulton recently blogged about a phenomenon familiar to anyone subscribed to the FDA’s recall e-mail list, its RSS feed or its Twitter account: a late rush of random recall messages that would require a prohibitive amount of time to sort and research. For example, in a two-minute span on June 15, @FDArecalls on Twitter buzzed with messages about multivitamin labels, fish, organic chocolate peanuts, white peppers and soy sprouts. Fulton also notes that many of the notices come out late in the day.

She proposes some sort of flagging or rating system to make it easier to figure out which stories are big deals and which aren’t. She may be on to something. The FDA could make these releases more accessible and useful for journalists and consumers. At the very least, it should be possible to explain the location and magnitude of the public health danger in a way that could be understood at a glance.
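In the absence of such a system, a reporter can approximate one. Below is a minimal sketch of that triage idea, assuming the third-party feedparser library; the feed URL and the keyword list are placeholders, not an FDA-provided feature, so substitute the FDA’s actual recall RSS feed and whatever terms matter to your beat.

```python
# Rough sketch of a do-it-yourself recall flagging system.
# FEED_URL is a placeholder -- point it at the FDA's actual recall RSS feed.
import feedparser

FEED_URL = "https://www.fda.gov/recalls.rss"  # placeholder URL

# Hypothetical keywords suggesting a bigger public-health story.
HIGH_PRIORITY = ("death", "illness", "injury", "salmonella", "listeria")

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    text = (entry.title + " " + entry.get("summary", "")).lower()
    flag = "HIGH" if any(word in text for word in HIGH_PRIORITY) else "low"
    print(f"[{flag}] {entry.title}")
```

A crude keyword pass like this at least flags the likeliest big deals at a glance instead of burying them among label corrections.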

What other tips or tricks help you figure out which recalls are relevant to your readers? Do you have suggestions as to how the FDA could make its releases more accessible or useful? Let us know.

Study: Weak science in hospital press releases

Andrew Van Dam


Reporters often come under fire for subpar or sensationalized coverage of medical science, but a new study by researchers from the Department of Veterans Affairs and Dartmouth Medical School indicates that hospital PR departments may be, at the very least, willing accomplices in the dissemination of eye-catching headlines and inflated, out-of-context reports.

From the study (emphasis added):

Academic medical centers issued a mean of 49 press releases annually. Among 200 randomly selected releases analyzed in detail, 87 (44%) promoted animal or laboratory research, of which 64 (74%) explicitly claimed relevance to human health. Among 95 releases about primary human research, 22 (23%) omitted study size and 32 (34%) failed to quantify results. Among all 113 releases about human research, few (17%) promoted studies with the strongest designs (randomized trials or meta-analyses). Forty percent reported on the most limited human studies—those with uncontrolled interventions, small samples (<30 participants), surrogate primary outcomes, or unpublished data—yet 58% lacked the relevant cautions.

The study says that these press releases tend to dictate which studies receive attention and how those studies that do receive coverage are reported on. In support of that assertion, the authors cite findings that about a third of medical stories rely heavily on press releases.

And because, as we all know, no report on a scholarly study would be complete without information on context and methodology, here’s how the authors selected their 200-press-release sample:

We selected the 10 highest-ranked and 10 lowest-ranked of the academic medical centers covered in U.S. News & World Report’s medical school research rankings that issued at least 10 releases in 2005. In addition, we identified each medical school’s affiliates by using an Association of American Medical Colleges database. The Appendix Table lists the centers and their affiliated press offices. We initially intended to compare press releases by research ranking, but because we found few differences, we report findings across the entire study sample, highlighting the few differences by rank where they exist.

Other interesting points from the study:

  • 26 percent of the releases exaggerated the study’s importance (it happened more often in animal research)
  • 24 percent used the word “significant,” though only one distinguished between statistical and clinical significance (see the sketch after this list)
  • The study’s authors don’t explicitly blame hospitals for feeding reporters sketchy information, though they do speculate that this could be a contributor to poor health reporting
  • Authors suggest that “the quickest strategy for improvement would be for centers to issue fewer releases about preliminary research, especially unpublished scientific meeting presentations.”
  • They also recommend “centers should limit releases about animal or laboratory research. …  Two thirds of even highly cited animal studies fail to translate into successful human treatments.”
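As promised above, here is a quick illustration of the statistical-versus-clinical distinction from the second bullet, with invented numbers — nothing here comes from the study. In a large enough trial, a blood pressure change too small to alter anyone’s care still clears the p < 0.05 bar easily.

```python
# Invented example: statistical significance without clinical significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50_000  # very large trial arms

# Hypothetical systolic blood pressure readings (mmHg); the treatment
# shifts the mean by only 0.5 mmHg -- clinically meaningless.
control = rng.normal(loc=140.0, scale=15.0, size=n)
treated = rng.normal(loc=139.5, scale=15.0, size=n)

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"mean difference = {treated.mean() - control.mean():.2f} mmHg")
print(f"p-value = {p_value:.2g}")  # far below 0.05, yet no doctor would care
```

That is why “significant” in a press release tells a reader almost nothing unless the release says which kind of significance it means.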

We’d like to hear from our readers: What press releases have you seen that were overblown or out of context? And, for those of you who work in hospital PR, what do you do to ensure your releases don’t fall into these traps?

Update

Jennifer Huget of The Washington Post’s Checkup blog posted about this study, concluding that journalists need to read the studies and not rely upon press releases.