Spin happens: How we cover medical studies affects readers’ attitudes toward results


How much does the way you cover a study matter? If we judge that question by how coverage can influence a reader’s opinion of a treatment’s benefit, it matters quite a bit, suggest the results of a recent study in BMC Medicine that examined spin in news stories about clinical studies.

In short, news articles that included spin in their coverage of a study about a particular treatment were more likely to leave readers with a positive impression of the treatment’s benefit.

The researchers defined spin as “a misrepresentation of study results, regardless of motive (intentionally or unintentionally), that overemphasizes the efficacy or overstates safety of the treatment as compared with that shown by the results.”

While their findings may not be surprising, the implications could be far-reaching. “Misinterpreting the content of news stories because of spin could have important public health consequences because the mass media can affect patient and public behavior,” the authors wrote. They note that, according to a Pew Research Center report, two-thirds of US readers took some kind of follow-up action after reading a health news story, such as discussing it with someone else, posting it on social media or searching for related information online.

This study, which didn’t use outside funding, was a collaboration among a team of epidemiologists based in Paris, AHCJ president Ivan Oransky, HealthNewsReview.org creator Gary Schwitzer and John Novack of Inspire, an online community of more than a million patients and caregivers.

The researchers conducted three online randomized controlled trials to learn how spin in a news story influences readers’ interpretation of study findings. One trial focused on coverage of a preclinical study (a drug not yet tested in humans), one on a non-randomized phase I/II study and one on a phase III/IV RCT, the last step before a drug application is submitted to the FDA for approval.

The researchers found news stories with spin reporting on each type of study and rewrote each story to remove the spin. They then randomly presented one of the two versions to Inspire patients and caregivers and asked, “What do you think is the probability that ‘treatment X’ would be beneficial to patients?” Participants answered on a scale of 0 (very unlikely) to 10 (very likely). (The stories were anonymized to remove the real names of the publication, journalist, treatment, funders and so on.)

Each of the three RCTs (one for each news story pairing) involved 300 participants, 900 in all, who read either the original story with spin or the revised story without spin.

Those who read the news story with spin about the preclinical study rated the likelihood of benefit at an average of 7.5 on the 0-to-10 scale, compared with 5.8 for those who read the story without spin. Similarly, the average ratings were 7.6 for those who read the story with spin about the phase I/II trial and 5.8 for those who read the revised story. Those reading the with-spin story about the phase III/IV RCT rated the treatment’s benefit at 7.2, while those reading the revised (non-spin) story rated it at 4.9. In all three cases, the difference between the mean ratings was statistically significant (P<0.001).

The authors also asked about readers’ impressions of the size of the treatment’s potential benefits, its safety, the importance of offering it to patients and the possible effect it could have on clinical practice. The results showed similarly higher mean ratings for the stories with spin than for those without spin.

So how many stories actually contain the kind of spin this study examined? Probably the majority: The authors referenced a study of Google Health News stories in which 88% of stories about medical studies contained at least one type of spin, such as misleading reporting or interpretation, omitting adverse events, suggesting that animal study results apply to humans or claiming causation in studies that reported only associations.

While the authors note that “spin in news stories is often related to the presence of spin in the published article and its press release,” that’s no excuse for irresponsible or incomplete reporting. One way to check our own reporting for spin, which can inadvertently slip into our articles, is to assess our stories against the HealthNewsReview.org criteria. Whether we use those criteria or some other approach, we can’t ignore the reality that the way we cover a study has impact, potentially both positive and negative, and that means we have a responsibility to get it right.

Tara Haelle

Tara Haelle is AHCJ’s health beat leader on infectious disease and formerly led the medical studies health beat. She’s the author of “Vaccination Investigation” and “The Informed Parent.”