It’s a well-worn mantra: Correlation does not equal causation. But even if we know this, is it always accurately and responsibly reflected in our stories and headlines?
It can be simpler and more elegant to say “Vodka causes sexually transmitted infections” in a headline than “Vodka consumption associated with increased risk of sexually transmitted infections.” (Note: This is not a real headline or based on a real study.) But in this made-up example, it’s laughably obvious that vodka itself does not cause STIs.
A reasonably intelligent reader likely would realize that the first headline actually meant the second one: that drinking a lot of vodka might (by way of increasing the likelihood of intercourse) increase the likelihood of contracting an STI.
But most associations we write about from medical studies aren’t so obvious. Readers may very well interpret a clumsy or irresponsible headline to mean causation when it should emphasize an association. Unless it’s a randomized controlled trial, the vast majority of medical studies cannot show causation — even if the possible mechanism for causation seems obvious and certain.
A recent open-access study in BMC Medicine illustrates the importance of being cautious when assessing and then claiming causality, especially if a journalist is leaning too heavily on press releases. The researchers gathered 312 news releases from nine journal, university and funder press offices and tested two interventions: “(a) aligning the headlines and main causal claims with the underlying evidence (strong for experimental, cautious for correlational) and (b) inserting explicit statements/caveats about inferring causality.”
They used alignment with 64 releases, added caveats about causal statements to 79 releases and did both to 80 releases. The remaining 89 were controls not changed in any way. Researchers then examined headlines, causal claims and caveats in 2,257 subsequent newspaper, website and broadcast stories.
The results aren’t necessarily surprising. News headlines were slightly more frequently aligned with the actual evidence when recommendations were made to adjust the press release (56 percent) compared to control releases (52 percent). The effect was greater in an as-treated analysis that considered the final press releases that were ultimately distributed (60 percent vs. 32 percent).
Claims in the news stories also were more likely to be aligned to the evidence (67 percent) when final press releases were aligned accurately compared to published control press releases (39 percent). Likewise, 20 percent of news stories contained specific caveats about causality when the press release did, while none of the news stories based on control press releases did.
Importantly, the researchers did not find that press releases were any less likely to receive media coverage if they included causality caveats or were adjusted to align correctly with the evidence. In other words, a more accurate, responsible press release is just as likely to get attention as a less cautious one.
While the implications of these findings may seem more relevant for press officers writing releases, they also reveal how much press releases influence coverage, suggesting that journalists need to be conscientious about how much they’re relying on press materials about a study.
The authors included one caveat about their own observational study:
“Findings from as-treated analysis are correlational and may not imply cause, although here the linking mechanism between press releases and news is known.”