Tag Archives: reporting on studies

Diet soda and stroke: A reminder on reporting preliminary studies

Over at the NeuroLogica blog, Steven Novella, M.D., explores the absurdity of a poster presentation at the American Stroke Association’s International Stroke Conference, one obviously intended to showcase raw preliminary research on soda consumption, somehow making headlines in health and science sections around the world. The poster in question, of course, was the one that reported a 61 percent increase in self-reported cardiovascular disease among those who drank diet soda versus those who drank no soda whatsoever.

It’s the sort of headline that should set off alarms in the subconscious of a health journalist, and Novella is quick to acknowledge the study’s flaws: It was observational, relied on self-reporting, didn’t control for many obvious confounders and measured only a single point in time. However, Novella writes, that doesn’t necessarily make it a bad study. It just makes it a preliminary one. One suited for something like a poster presentation. Which is exactly what it was. (Emphasis mine.)

While this study has serious flaws that preclude any confident interpretation, it is a reasonable preliminary study – the kind of study that gets presented as a poster at a meeting, rather than published in a high-impact peer-reviewed journal. Such preliminary research is mostly an exercise in data dredging – looking at data sets for any interesting signals. The purpose of such preliminary research is to determine whether or not more definitive follow up research is worth the time and effort. If there were no signal in this data, then don’t bother designing and executing a tightly controlled several year prospective trial.

So if the study was just a necessary but early step in the research process, how did things get so out of hand? Frequent consumers of health media criticism already know what’s coming next.

The problem is in the reporting of these studies. The mainstream media probably should just ignore any study that is deemed preliminary, especially if it’s just an isolated study. Perhaps in a thorough feature article it would be reasonable to give an overview of the state of the research into a question, including preliminary studies, because in a feature time can be taken to put the evidence into perspective. But reporting a single preliminary study as science news is a highly problematic approach.

Novella points out that people who saw stories about the study – even those stories that included reasonable reporting – will likely just remember the headline linking diet sodas and stroke risk. He also reminds us that “Over-reporting of preliminary results also has the effect of confusing the public with lots of noisy information, most of which is not true.”
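
To see why a dredged-up preliminary signal so often turns out to be noise, here is a minimal simulation sketch in Python (the cohort, the exposures and the outcome are entirely invented; nothing comes from the actual diet soda data): scan a single null dataset for enough candidate associations and some of them will look “significant” purely by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_subjects = 2500   # hypothetical survey cohort
n_exposures = 40    # hypothetical dietary/lifestyle questions to dredge through

# A null world: the outcome (say, a self-reported vascular event) occurs by
# chance alone and is unrelated to every exposure recorded.
outcome = rng.binomial(1, 0.05, size=n_subjects)
exposures = rng.binomial(1, 0.30, size=(n_exposures, n_subjects))

# Test every exposure against the outcome, the way an exploratory analysis does.
for i, exposure in enumerate(exposures):
    table = [
        [np.sum((exposure == 1) & (outcome == 1)), np.sum((exposure == 1) & (outcome == 0))],
        [np.sum((exposure == 0) & (outcome == 1)), np.sum((exposure == 0) & (outcome == 0))],
    ]
    _, p_value, _, _ = stats.chi2_contingency(table)
    if p_value < 0.05:
        print(f"Exposure {i}: 'significant' association with the outcome, p = {p_value:.3f}")

# Roughly 1 in 20 of these purely random comparisons will cross p < 0.05 by
# chance alone - exactly the kind of signal that makes a poster, and a headline.
```

The tightly controlled prospective follow-up Novella describes exists precisely to test whether such a signal survives outside the dataset it was dredged from.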

New guide focuses on covering medical studies

Reporters are inundated with lures to cover the latest medical study or scientific conference paper. And there are some significant milestones being reached in medical research.

But, more often, the information reaching the public is way too preliminary or even misleading, say those producing a new AHCJ reporting guide on covering health studies.

This guide will help journalists analyze and write about health and medical research studies. It offers advice on recognizing and reporting the problems, limitations and backstory of a study, as well as publication biases in medical journals, and it includes 10 questions reporters should answer to produce a meaningful and appropriately skeptical report.

The guide was written by longtime AHCJ member and HealthNewsReview.org publisher Gary Schwitzer with contributions from Ivan Oransky, M.D., executive editor of Reuters Health and AHCJ’s treasurer.

AHCJ hopes this guide, supported by the Robert Wood Johnson Foundation, will be a road map to help reporters do a better job of explaining research results for their audiences.

It is the fifth slim guide published in this series. Also available:

  • Covering the Health of Local Nursing Homes
  • Navigating the CDC: A Journalist’s Guide to the Centers for Disease Control and Prevention’s Web Site
  • Covering Obesity: A Guide for Reporters
  • Covering Hospitals: Using Tools on the Web

Schwitzer’s year-end thoughts on health journalism

Gary Schwitzer, publisher of HealthNewsReview.org and an associate professor in the University of Minnesota’s School of Journalism & Mass Communication, posted “10 trends in health care journalism going into 2010.” He looks back at the good, the bad, the ugly and the data, looks toward the future and offers a few words of advice for health journalists.

Universities unite to present research to consumers

Thirty-five top universities, including many of North America’s leading research institutions, have banded together to create Futurity.org, a site designed to bypass the media and present their research directly to the public. According to Scott Jaschik of Inside Higher Ed, each institution contributed $2,000 to help get the site started.

Paul Rogers of the San Jose Mercury News reports that the site will function as a sort of social media wire service, intended to feed cleaned-up press releases to social media sites like Twitter and MySpace as well as news aggregators like Google News. Rogers found that the universities were turning to new media because of what they said was a decline in reliable science reporting.

Curtis Brainard of Columbia Journalism Review points to an important potential issue:

Labeling and transparency, however, are likely to become even greater issues for Futurity once it finalizes its syndication deals with Google News and Yahoo News. If that happens, its posts will be listed online next to similar items from traditional outlets like the Associated Press or The New York Times, making differentiation vitally important.

Study: Weak science in hospital press releases

Reporters often come under fire for subpar or sensationalized coverage of medical science, but a new study by researchers from the Department of Veterans Affairs and Dartmouth Medical School indicates that hospital PR departments may be, at the very least, willing accomplices in the dissemination of articles with eye-catching headlines and inflated, out-of-context reports.

From the study (emphasis added):

Academic medical centers issued a mean of 49 press releases annually. Among 200 randomly selected releases analyzed in detail, 87 (44%) promoted animal or laboratory research, of which 64 (74%) explicitly claimed relevance to human health. Among 95 releases about primary human research, 22 (23%) omitted study size and 32 (34%) failed to quantify results. Among all 113 releases about human research, few (17%) promoted studies with the strongest designs (randomized trials or meta-analyses). Forty percent reported on the most limited human studies—those with uncontrolled interventions, small samples (<30 participants), surrogate primary outcomes, or unpublished data—yet 58% lacked the relevant cautions.

The study says that these press releases tend to dictate which studies receive attention and how those studies that do receive coverage are reported on. In support of that assertion, the authors cite findings that about a third of medical stories rely heavily on press releases.

And because, as we all know, no report on a scholarly study would be complete without information on context and methodology, here’s how the authors selected their 200-press-release sample:

We selected the 10 highest-ranked and 10 lowest-ranked of the academic medical centers covered in U.S. News & World Report‘s medical school research rankings that issued at least 10 releases in 2005. In addition, we identified each medical school’s affiliates by using an Association of American Medical Colleges database. The Appendix Table lists the centers and their affiliated press offices. We initially intended to compare press releases by research ranking, but because we found few differences, we report findings across the entire study sample, highlighting the few differences by rank where they exist.

Other interesting points from the study:

  • 26 percent of the releases exaggerated the importance of the research (exaggeration was more common in releases about animal research)
  • 24 percent used the word “significant,” though only one release distinguished between statistical and clinical significance (a quick illustration of that distinction follows this list)
  • The study’s authors don’t explicitly blame hospitals for feeding reporters sketchy information, though they do speculate that this could be a contributor to poor health reporting
  • Authors suggest that “the quickest strategy for improvement would be for centers to issue fewer releases about preliminary research, especially unpublished scientific meeting presentations.”
  • They also recommend “centers should limit releases about animal or laboratory research. …  Two thirds of even highly cited animal studies fail to translate into successful human treatments.”
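
On that second bullet, the statistical-versus-clinical distinction is easy to show with invented numbers (a quick Python sketch; the figures below are purely illustrative and are not drawn from the study): with a large enough sample, a trivially small difference can still produce a comfortably “significant” p-value.

```python
from scipy import stats

# Invented illustration: a huge two-arm comparison with a tiny difference in event rates.
events_treated, n_treated = 4_700, 100_000   # 4.7% event rate
events_control, n_control = 4_900, 100_000   # 4.9% event rate

table = [
    [events_treated, n_treated - events_treated],
    [events_control, n_control - events_control],
]
_, p_value, _, _ = stats.chi2_contingency(table)

risk_difference = events_control / n_control - events_treated / n_treated
print(f"p-value: {p_value:.3f}")                           # below 0.05: "statistically significant"
print(f"absolute risk difference: {risk_difference:.2%}")  # 0.20 percentage points: arguably clinically trivial
```

A release (or a story) that says only “significant” leaves readers to guess which of those two senses is meant.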

We’d like to hear from our readers – what press releases have you seen that were overblown or out of context? And, for those of you who work in hospital PR, what do you do to ensure your releases don’t fall into these traps?

Update

Jennifer Huget of The Washington Post‘s Checkup blog posted about this study, concluding that journalists need to read the studies and not rely upon press releases.