If satire is a lesson, as novelist Vladimir Nabokov allegedly said, then John Oliver is among its best teachers — even, perhaps surprisingly, when it comes to assessing medical studies and their coverage in the media. If you haven’t already seen the segment I’m talking about, it’s really worth the time, both for lessons and for laughs, to watch it in full below.
On Sunday’s episode of his HBO show “Last Week Tonight,” Oliver went on a tirade about how poorly the media often portray the studies that science is constantly producing. With the help of HealthNewsReview.org’s Gary Schwitzer, the show’s producers identified a couple of doozies in particular that revealed just how much a study’s real findings can get warped, just like a “game of telephone,” Oliver said, where “the substance gets distorted at every step.” Among those was the news that “smelling farts reduces cancer” even though the study in question didn’t even mention flatulence or its ability to influence cancerous cells — a case study Schwitzer noted during his workshop at Health Journalism 2016 as well.
An opening montage of ridiculous claims also concluded with a study supposedly showing that a glass of red wine was as good for you as an hour at the gym (it’s not, as I’ve deconstructed before). Speaking of favorite libations, Oliver picked on the single substance that probably shows up more than any other in these misconstrued studies — coffee.
“Coffee today is like God in the Old Testament: it will either save you or kill you depending on how much you believe in its magical powers,” Oliver wisecracked.
But amidst his jokes, Oliver offered truly valuable advice for news consumers and for reporters. After pointing out that “not all scientific studies are equal,” Oliver explained, “Some may appear in less-than-legitimate scientific journals, and others may be subtly biased because of scientists feeling pressure to come up with eye-catching positive results.”
He went on to talk about the ways study design, data or results can be manipulated: “You could alter how long [the study] lasts, or make your random sample too small to be reliable or engage in something that scientists call P-hacking,” a phenomenon Christie Aschwanden brilliantly explained in a piece we highlighted last summer. Oliver’s explanation was a bit more blunt: “It basically means collecting lots of variables and then playing with your data until you find something that counts as statistically significant but is probably meaningless.”
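P-hacking is easy to see in action with a quick simulation. The sketch below is a hypothetical illustration (not from the segment or Aschwanden's piece): it tests 200 pure-noise "variables" against a single threshold of p < 0.05, using only Python's standard library, and shows that a handful come out "statistically significant" by chance alone.

```python
import math
import random

random.seed(42)  # fixed seed so the run is repeatable

def two_sided_p(heads, n):
    """Approximate two-sided p-value for 'is this coin fair?'
    using the normal approximation to the binomial."""
    z = (heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(abs(z) / math.sqrt(2))

n_flips = 100      # observations per variable
n_variables = 200  # e.g. 200 unrelated foods tested against one outcome

# Every variable here is pure noise: a fair coin flipped 100 times.
false_positives = []
for var in range(n_variables):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    p = two_sided_p(heads, n_flips)
    if p < 0.05:
        false_positives.append((var, round(p, 4)))

# A few noise variables will cross p < 0.05 by chance alone; report
# only those and you have a "statistically significant" finding.
print(f"{len(false_positives)} of {n_variables} noise variables "
      f"reached p < 0.05")
```

Running enough tests guarantees some will clear the significance bar, which is exactly why a single eye-catching result, stripped of the other comparisons that were tried, is "probably meaningless."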
He also gave a nod to Aschwanden in discussing her excellent FiveThirtyEight piece on why nutrition research is so frustratingly unreliable. Associations can be found between eating tomatoes and being Jewish, for example, or between eating cabbage and having an innie belly button. And the problem isn’t limited to poorly conducted studies, Oliver noted. “Even the best designed studies can get flukish results,” which is why replication — as unsexy as it is — is so important in scientific research. It provides the context that is so essential for journalists to include when reporting on a single study or an overall research topic.
In discussing another case study related to a study on cocoa flavanols and pregnancy, Oliver reinforced the point Ivan Oransky, M.D., frequently makes: relying only on a press release to report a study’s findings is journalistic malpractice. “Too often, a small study with nuanced tentative findings gets blown out of all proportion when presented to us the lay public,” Oliver said, pointing out that the chocolate study involved just 20 women.
Animal studies also fell under Oliver’s scrutiny, as well they should, echoing warnings from AHCJ presenters at various workshops. “The overwhelming majority of treatments that work in lab mice do not end up working in humans,” Oliver said. He also brought up the potential conflicts of interest in studies funded by industry, such as a Coca-Cola-funded study on the dangers of driving while dehydrated.
But even then, Oliver himself included the very nuance he’s asking of journalists: “Just because a study is industry-funded or its sample size is small or it was done on mice doesn’t mean it’s automatically flawed, but it is something the media reporting on it should probably tell you about,” he said. Indeed we should.
And what’s the harm of not doing that? “If I were to tell you about each of these studies in isolation, at some point you might reasonably think, ‘Well no one knows anything about what causes cancer,’” Oliver said about a forest plot showing how wine, tomatoes, tea, milk, eggs, corn, coffee, butter and beef all supposedly both increase and decrease cancer risk. “That is a problem. That’s the sort of thing that enabled tobacco companies for years to insist the science isn’t in yet.”
And then he drove home why cherry-picking the science we like is so dangerous: “If we start thinking science is à la carte, that’s what leads people to think man-made climate change isn’t real or that vaccines cause autism.”
Two major contributors to that kind of thinking are a lack of context and an unhealthy dose of false balance in health reporting. That’s why reporters absolutely must include context in reporting on medical research — or else not report on it at all, if you follow Oliver’s advice. And it’s not bad advice. Misleading research presented in isolation can cause more harm than ignorance of that particular research altogether. It’s not exciting, but the reality, as an actor in a final amusing segment states, is that “science is a very slow and rigorous process that does not lend itself to sweeping conclusions.” Journalists should therefore avoid those conclusions themselves.
(Editor’s note: For more resources on accurately reporting on medical studies, please see the materials in our Medical Studies Core Topic.)