Journalists love reporting new findings about a disease and a particular risk factor, but they are not so keen on following what happens later and reporting on whether the finding was replicated – even though, just over half the time, it is later disproved.
This comes from a recent study in PLOS ONE by authors who previously found that journalists tend to favor initial findings over subsequent findings on the same outcome. This inclination compromises the accuracy of overall reporting since so many studies are not replicated.
The alternative can confuse and frustrate readers, such as when journalists report so many studies’ findings that we end up with what I call the Coffee, Wine and Chocolate Problem (which is how Retraction Watch founders Ivan Oransky and Adam Marcus opened their Stat commentary on this study). It’s not that too much of these great things is a bad thing. (Though that can be true too!) But flip-flopping research findings can leave readers frustrated, confused and jaded, since they tend not to fully understand a fundamental aspect of science: “Scientific progress is a cumulative process that reduces uncertainty by combining uncertain initial findings with validation by subsequent studies,” as the authors write.
The authors also emphasize that studies with negative findings aren’t a problem if they are contributing to the self-corrective process of science. As the authors note, “many initial findings are refuted by subsequent studies, a trend that holds true at all three levels of biomedical research: i) preclinical studies, ii) associations between biomarkers or risk factors and diseases, and iii) clinical trials.”
Is there a solution to these competing problems – public jadedness on flip-flopping conclusions and the lack of follow-up on replication studies?
Provide context in every story you write. Tell your readers where the consensus on the evidence stands at that point, and where it appears to be leaning. Also, continue to follow the trajectory of research on a particular topic after your initial story. Admittedly, that’s easier said than done, especially if you are a general assignment reporter or have a broad health beat rather than a specialty. At the very least, take note when you come across new findings on a past story topic. The PLOS ONE study shows why a good faith effort is so important.
The authors focused on studies that linked biomarkers and risk factors with diseases, excluding preclinical studies, since they rarely are included in meta-analyses, and treatment effectiveness studies, since reporting can be biased due to industry influence.
They analyzed newspaper coverage of 5,029 studies: 4,723 single studies – both initial studies (the first time a specific association was tested and reported in the scientific literature) and subsequent studies (those attempting to replicate the findings of the initial studies) – plus 306 meta-analyses related to those studies. The studies were divided into three groups: psychiatry, neurology and a group of non-mental diseases (breast cancer, glaucoma, psoriasis and rheumatoid arthritis). The authors used the Dow Jones Factiva database to determine which studies were covered by newspapers, excluding press agency (AP) and trade publications.
Here is what they found (yes, it’s a lot of numbers, but they’re worth digesting one at a time):
- About 3.3 percent of the single studies were covered by at least one newspaper (156 of 4,723) in 1,475 news articles, and 1.6 percent of the meta-analyses were covered.
- Newspaper coverage focused on 10 conditions: ADHD, autism, depression, schizophrenia, Alzheimer’s and Parkinson’s, multiple sclerosis, breast cancer, glaucoma and rheumatoid arthritis.
- Among the 4,723 single studies, 2 percent of the psychiatry studies, 2 percent of the neurology studies, and 5 percent of the other diseases studies were covered.
- Studies involving lifestyle risk factors (including meta-analyses) were covered 3 to 5 times more often than studies reporting non-lifestyle risk factors, primarily genetics.
- Newspapers were far more likely to report initial findings than subsequent ones: 13.1 percent of initial studies were covered compared to only 2.4 percent of subsequent studies.
- Most of this disparity came from coverage of studies not involving lifestyle factors. Journalists covered 1 percent of the initial non-lifestyle studies and 1.2 percent of the follow-up ones, compared to 12.8 percent of initial studies and 9.7 percent of subsequent ones among the lifestyle studies.
- Journalists much prefer studies with positive findings – where an association is found – and under-report negative findings. All of the initial studies covered had positive findings; no initial study with negative findings received coverage. Only 13.6 percent of follow-up studies with negative findings were reported. Of the five meta-analyses covered by newspapers, only one had negative findings – in four newspaper articles.
- Of the single studies covered, just under half were confirmed by a later meta-analysis, but only a third of the initial studies were. In other words, two-thirds of first-time findings ended up being contradicted by a later meta-analysis. Meanwhile, 56 percent of follow-up studies were later confirmed.
- Journalists were most likely to cover initial findings of psychiatry studies, but these were also the least likely to be later confirmed.
- “The percentage of articles covered by newspapers strongly increased with the impact factor,” the authors write – the higher a journal’s impact factor, the more likely a study in it was to be covered.
- Except for 14 breast cancer studies published in journals with impact factors between 5 and 6.5, the researchers found no newspaper coverage of studies published in journals with an impact factor below 6.5. (They did not check the 2,342 non-breast-cancer studies published in journals with impact factors below 5, because none of the 220 articles from journals with impact factors between 5 and 6.5 had been covered by newspapers. No breast cancer studies in journals with an impact factor below 5 had been covered.)
So, what are the big takeaways?
Journalists can inadvertently mislead readers by not following up on studies they initially report on, especially when more than half of those findings end up being contradicted by subsequent research. This is particularly true of psychiatry and genetics studies. Journalists also fall into the same trap as scientific journals, reporting far more often on positive findings than on negative findings, even though the latter are just as important.
Also, journalists do not report on meta-analyses often enough, even though these studies nearly always have more accurate conclusions than individual studies. Finally, journalists preferentially focus on studies from high-impact journals, which – as a past pair of blog posts revealed – neglects potentially good science worth reporting from less prestigious journals.
As you can see, there’s a lot we can do as journalists to step up our game in reporting on medical research.