Lessons from a soda study that lost its fizz


Last October, Brigham and Women’s Hospital took the unusual step of recalling a press release about a research study.

Just hours before the study’s embargo lifted, Brigham’s press officers asked the researchers to stop giving interviews, and barely half an hour before the story went live, they alerted the press that the study’s data was “weak.”

People involved in the decision say it’s the first time the Harvard-affiliated hospital had ever publicly pulled its support for a study.

Of course, Brigham’s disavowal of the research became the headline instead.

After Brigham’s decision not to promote the study, I reached out to the hospital’s media team, the study authors, and the editor of the journal that published the study to get their perspectives. They agreed to help because they want health reporters to understand the pitfalls of promoting science, and how that effort can sometimes veer too far from a study’s actual findings.

Here are lessons from the soda study that fizzled:

Publication in a peer-reviewed journal is not enough.

Before this release, Erin McDonough, vice president of communications at Brigham, says peer-reviewed publication was the only criterion the team used when deciding whether to promote a study. Now, she says, they’ll set the bar higher.

“We’ll ask a lot more questions going forward,” she says. “Is it clinical or translational? Is there any controversy around it? What’s the strength of the study design? What’s the size of the audience that the science could impact? Are there potential conflicts of interest? Who’s funding it?”

“This is Atul Gawande’s hospital; I feel like we have a checklist for just about everything. I said to my team, ‘I feel like we now need a checklist to make sure that we ask every single question, every time.’”

Checklists help reporters, too. Gary Schwitzer offers 10 criteria for good health reporting on his website HealthNewsReview.org. Craig Silverman, author of the Poynter Institute blog Regret the Error, offers a free, downloadable accuracy checklist for reporters.

Get second and third opinions

McDonough first learned that something might be amiss when a scientist who was not involved in the research asked to read the release.

The researcher quickly phoned back to voice concerns about how the study was framed: “It’s like you should be using a tack hammer and this is a sledgehammer.”

Concerned, McDonough says she sent the release to two other researchers. Both agreed the release overstated the study’s significance.

“I don’t regret that we put out a release,” McDonough says. “I regret the intensity of the language we used.”

McDonough says the topic was hot, the material was being published in a peer-reviewed journal, and it provided the opportunity to promote the work of an articulate, photogenic scientist.

“So there were so many really exciting elements for us that I think we didn’t really vet the findings themselves.”

Getting independent reviews of study findings is also good practice for reporters. It takes more time, but it’s an important step for balance and context.

Don’t try to construct a narrative where there isn’t one.

After all the sound and fury, I have to admit, I was surprised when I read the actual study. It’s not bad, but it’s also not definitive. It’s the kind of study that should lead to more studies, not raise red flags for consumers.

It’s published in the American Journal of Clinical Nutrition. It draws on data from more than 125,000 men and women who are taking part in the long-running Nurses’ Health Study and Health Professionals Follow-Up Study. Researchers combed through more than 2.2 million person-years of data on their exposures.

The problem is that the observational findings don’t seem to point strongly in any one direction.

Men who drank a can of diet soda every day had slightly but significantly higher risks of certain kinds of blood cancers compared to men who didn’t drink diet soda.

But those risks were also elevated in men who drank regular soda, a finding researchers couldn’t really explain.

And women who drank diet and regular sodas didn’t appear to have higher risks for the same cancers, until researchers combined both sexes to look at the risk of leukemia. In that lumped analysis, the risks went up for both men and women who drank diet soda.

Researchers controlled their data for a wide variety of confounders, but still ended up with wide confidence intervals, a sign that the estimates were imprecise and that factors they didn’t take into account could still be clouding the associations.

And most of the increases in risk were relatively modest: between 30 percent and 66 percent higher, or relative risks of roughly 1.3 to 1.66. Many biostatisticians don’t give observational findings much weight until risks at least triple in exposed versus unexposed people.

Here’s what Walter Willett, M.D., Dr.P.H., a professor of epidemiology and nutrition and one of the study’s senior authors, said about his study in an email:

“I was quite troubled by this as the study was dismissed as ‘weak’. There is an important distinction between a study that is weak because of the design or conduct versus findings that are in a gray zone because they are neither clearly null nor clearly positive. This study was strong as the population was very large, and we had information on the major sources of aspartame since it entered the US market and this information was updated on a regular basis. Any study, no matter how strong, can have findings that are not clear, or that have unclear implications. This is usually an area where more research is needed, as I believe is the case in this situation.”

Dennis M. Bier, M.D., AJCN’s editor, said the journal was right to publish the paper with appropriate cautions about the data:

“Both the editors and the external reviewers ensured that the manuscript’s text adequately and explicitly reflected the possibility that the inconsistent associations found in the study may have been due to chance alone.”

Bier believes reporters and press officers ran into trouble when they tried to construct a coherent narrative around the scattershot findings.

“The bottom line is that one has to read and report accurately what a study actually found (and the caveats discussed in the paper that might limit the conclusions), and not just report what the readership may want to hear or what our biases lead us to believe,” he says. “As Philip K. Dick, the famous science fiction writer said, ‘Reality is what’s left after we stop believing in something,’ or words to that effect.”

Brenda Goodman