That holds true even when the announcement comes from the National Institutes of Health. The NIH’s release about its Systolic Blood Pressure Intervention Trial (SPRINT) in September set so many red flags waving that they could have held a parade.
The news trumpeted by the NIH was that keeping blood pressure at 120 mm Hg could cut the risk of heart attacks by 30 percent and the risk of any cardiovascular deaths by 25 percent. But as Kathlyn Stone noted at HealthNewsReview.org, the NIH statement was “unusual” because “the agency didn’t release any of the evidence, statistics or even which kind of drugs were used in the trial.”
Basically, they made a sweeping claim that journalists were expected to take at face value. As Melinda Wenner Moyer pointed out for Scientific American, the “slick” NIH release “provided hype but little substance.” Because the trial has been neither peer-reviewed nor published, Moyer said, the investigators can dodge important follow-up questions by invoking the Ingelfinger rule, which discourages researchers from discussing details of research not yet published in a peer-reviewed journal.
Among the omissions Moyer noted:
- The release didn’t “emphasize that the findings may only apply to a limited segment of the population.” (The study focused on high-risk adults.)
- It didn’t explain why the findings contradict previous trials’ results.
- Most trial subjects had to take multiple blood pressure meds, but the release did not mention the risks of multiple drugs, or that some adults need up to four meds to bring their blood pressure down to the study’s threshold.
- National Heart, Lung, and Blood Institute Director Gary Gibbons “apparently gave no indication of either the absolute reduction in events or the number needed to treat — statistics that we think are vital to understanding the magnitude of the benefit.”
- “And there was nothing in the news release (or apparently in the news conference) about the potential harms of treating to a lower target.” It’s a criticism similar to Moyer’s point about the risks of multiple drugs.
Lomangino gives credit to doctors who called out the hype on Twitter, but notes that The New York Times heralded the findings, offering little more information than the press release. (Ivan Oransky, M.D., has often said — and I agree — that it’s journalistic malpractice to rely only on a press release when reporting.)
It’s possible that the hype is justified, that taking whatever steps are necessary to drop one’s blood pressure to 120 really can substantially lower the risk of cardiovascular events, at least for adults at the highest risk. But the data to show it aren’t yet available. To drive this point home, Moyer provides an excellent example that makes the study’s limitations particularly concrete and relevant for readers:
“According to the Framingham risk calculator, a 55-year-old woman with borderline high blood pressure and cholesterol has only a one in 100 chance of having a heart attack in the next 10 years — so compared with the SPRINT subjects, she is at extremely low risk. Considering that the SPRINT patients who reached the target number of 120 had to take, on average, three different blood pressure drugs to do so — diuretics, calcium channel blockers and ACE inhibitors, among other options — should this low-risk woman really take a bunch of drugs to get her systolic pressure down to 120 so that she can reduce her risk of a heart attack by 30 percent, from one in 100 to one in 143? Among those at low risk for heart problems, drug-related risks — which include kidney problems and abnormally slow heart rate — could trump potential drug benefits.”
The NIH posted its own Q&A about the trial, which also neglected to address the critical questions journalists should ask before reporting such findings. Fortunately, the post did note, multiple times, that the results were “preliminary,” and it announced that trial investigators would soon post “a description of the treatment approaches that were used in the study.”
Why hype a study to the press as “landmark,” then downplay it elsewhere? Trying to have it both ways raises even more questions about the study, and more journalists should be pointing that out.