Every journalist covering medical and other types of scientific research should read this thought-provoking open-access article recently published in PNAS: “Crisis or self-correction: Rethinking media narratives about the well-being of science.”
This piece by Kathleen Hall Jamieson of the Annenberg School for Communication at the University of Pennsylvania is one of the best articles I’ve read about how to think about the big picture in our coverage of medicine and science, and about how media narratives shape public perception of science. It’s one of those rare, important writings whose entire purpose is to examine the nuance that’s missing – yet essential – in the majority of science and medicine coverage.
It’s impossible to summarize the whole thing, so I’ll hit a few highlights and hope you’ll click through and read the entire piece. Jamieson begins by describing the three most common narratives found in the media about science and medicine:
- The first is the quest discovery narrative, which draws on the long, rich history of quest stories, dating back to oral traditions: Gilgamesh, The Odyssey, Lord of the Rings and so forth. Any fan of Star Wars is familiar with the work of Joseph Campbell, who is probably most famous for discussing these kinds of archetypal narratives in human history. Scientific discovery – and even the description of the scientific method/process itself, as Jamieson describes – is a natural fit for this narrative and hence the most common in the media.
- The foil to the above is what Jamieson calls the “counterfeit quest discovery.” In this case, the scientist with the discovery is the villain as others – journalists, researchers, some kid in their garage, whatever – discover their malfeasance. They’re a fraud, their discoveries are frauds, and the media focus is on their fall from grace. (She cites the Anil Potti and Haruko Obokata cases as examples, and I’d add Andrew Wakefield to the list.) The advantage of this narrative is that it at least focuses the blame on a single bad actor, unlike the third narrative type.
- The third can be the most dangerous as far as rocking the public’s trust in science: the systemic “science is broken” narratives, which Jamieson discusses quite a bit since they have risen to prominence particularly over the past decade. It’s also the one with the most irony, since the examples provided often depict why science is NOT broken. They demonstrate transparency, self-correction, failed replications that show scientists are actually attempting to replicate results, and scientists uncovering frauds. (By the way, Christie Aschwanden’s “Science Isn’t Broken” is a fantastic antidote to all this.)
That we find out about such problem cases – and how they are leading to changes and fixes – shows how robust science is. But that’s not always the message the public walks away with, largely because of the headlines on these stories. Jamieson provides examples of such headlines:
- “John Ioannidis has dedicated his life to quantifying how science is broken”
- “Science journals screw up hundreds of times a year. This guy keeps track of every mistake”
- “Psychology’s replication crisis can’t be wished away”
- “The replicability crisis in science”
- “Big science is broken”
- “Cancer research is broken: There’s a replication crisis in biomedicine – and no one even knows how deep it runs”
- “Unreliable research: Trouble at the lab. Scientists like to think of science as self-correcting. To an alarming degree, it is not.”
Jamieson mentions those often cited in these articles – John Ioannidis, M.D., Brian Nosek, Ph.D., C. Glenn Begley and AHCJ President Ivan Oransky, M.D. – even though in reality many of these individuals are offering solutions and context, essential pieces often missing from these Chicken Little articles.
“Because those whose work is prominently cited to certify that science is broken are spearheading efforts to solve identified problems, their work is evidence of the resilience of science,” Jamieson writes. Yet that focus ends up “buried in a problem-focused news narrative,” with scientists allegedly “publicizing problems, not proffering solutions.”
She presents data – analysis of recent news articles – to back up her points and acknowledges that many of these articles (up to a third in one analysis) are written by scientists themselves.
Journalists can be part of conveying a more balanced, nuanced, accurate view of science by following four principles she provides: “(i) supplement the quest discovery, counterfeit quest, and systemic problem narratives with content that reflects the practice and protections of science; (ii) treat self-correction as a predicate, not an afterthought; (iii) link indictment to aspiration; and (iv) focus on problems without shortchanging solutions and in the process hold the science community accountable for protecting the integrity of science.”
Jamieson goes into detail on each, and it’s worth a journalist’s time to read the piece (also bookmark it, or maybe even print it to reread). Whether journalists signed up for it or not, we are a vital part of how the public perceives science. We have to take our power to influence – and the responsibility that comes with it – as seriously as our duty to report the truth.