A recent editorial in the Journal of the American Medical Association explored the responsibility that journals have to public health in reporting on the association – or lack thereof – between adverse events and different drugs, devices or vaccines.
Reporting on these kinds of studies is a mainstay for most regular health beat reporters: Every week a new study says that this drug may increase the risk of that condition, or that this device is no longer thought to increase the risk of some other condition.
While the editorial points out the journal’s responsibility in publishing these studies, so that doctors can discuss risks of treatment possibilities with their patients, what is a journalist’s responsibility in reporting these findings? And how do journalists avoid fatigue – and help their readers avoid fatigue – with findings that regularly contradict each other (eggs and heart disease, anyone?) or that have been reported dozens of times already but never go away (e.g., vaccines not causing autism)?
The answer to both questions is actually hinted at in the last paragraph of the JAMA editorial:
“Even though no observational study examining the relationship between an exposure and an outcome can definitively establish ‘positive’ cause-and-effect results, and no observational study can definitively prove ‘negative’ results, each study adds to the totality of evidence regarding the safety of drugs, devices and vaccines.”
That message perhaps is the most important one for journalists to understand themselves and to convey in the subtext of all the articles they write on these studies: this is how science works, and these are the limitations of that science… so stay tuned for more.
This message can be frustrating for readers because they understandably want answers now for important decisions they need to make about managing their health. But that’s not how science works, and making that clear, whether through experts’ quotes, excerpts from the studies themselves or explanatory statements, will gradually help readers understand (if not become comfortable with) the uncertainty that scientists accept as a daily reality of their profession.
While all readers should have encountered the scientific method again and again in their science education, many have forgotten it or never fully grasped its big-picture implications in the first place. Yet understanding the scientific method and the need to constantly replicate studies to build up the evidence base is key to avoiding confusion and frustration in assessing whether to try a certain treatment or not.
Most often, researchers use observational studies, whether prospective, retrospective or cross-sectional, to study these questions. But all epidemiological studies are subject to biases and limitations, especially when it comes to confounders that make it hard to tease out what’s causing what. The more studies that are done, the closer researchers get to a more definitive answer on whether a particular intervention increases, decreases or doesn’t affect the risk of a particular condition, but that sometimes means conflicting data arise along the way.
For example, the editorial mentions a recent study reporting on no increased risk of bladder cancer with the diabetes drug pioglitazone, despite earlier studies suggesting a risk. The authors point out “additional studies are needed to confirm these results, especially considering that the small increased bladder cancer risk previously observed could not be ruled out.” In other words, again, just as one study cannot say X definitely causes Y, a single study also cannot say X definitely does not cause Y.
The journal also recently published a study finding no increased risk of autism with the MMR vaccine, a conclusion that has remained consistent for more than a decade. Even so, persistent concerns by some parents about vaccines have made continued research relevant.
For better or worse, journalists reporting on these studies are now proxies for science teachers in helping readers understand what a particular study’s findings mean, and that means constantly providing context:
- What have past studies shown?
- How strong are those studies?
- What are alternative explanations for the findings?
Answering these questions requires more than just repeating “correlation is not causation.” We know from the Bradford Hill criteria that correlation can, at a certain point, indicate causation, so readers need a journalist who understands the evidence base well enough to put it into context for them.
Traditionally, journal articles were never intended for the public – they are published for researchers’ peers – but in the Information Age, anyone can read these articles, and journalists report on them day in and day out.
If the journal’s responsibility is to publish articles of interest to public health, it is the responsibility of journalists to help the public understand them.