If satire is a lesson, as novelist Vladimir Nabokov allegedly said, then John Oliver is among its best teachers — even, perhaps surprisingly, when it comes to assessing medical studies and their coverage in the media. If you haven’t already seen the segment I’m talking about, it’s really worth the time, both for lessons and for laughs, to watch it in full below.
On Sunday’s episode of his HBO show “Last Week Tonight,” Oliver went on a tirade about how poorly the media often portray the studies that science is constantly producing.
At Health Journalism 2016 in Cleveland, Andrew M. Seaman and Hilda Bastian discussed shortcuts for weighing the likelihood a study’s answer is right, making sense of shifting bodies of evidence and cutting through researcher spin.
Almost since the inception of health journalism, reporting on medical research has been one of the mainstays of the job. That does not, however, mean the work is easy or to be taken lightly. With dozens of potentially interesting and relevant papers coming out each week, full of statistics and findings that may or may not be “statistically significant” or “clinically significant,” covering medical studies can be daunting to a newcomer.
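The gap between statistical and clinical significance is easy to see with a quick back-of-the-envelope calculation. The sketch below uses made-up numbers (a hypothetical blood-pressure trial, not any real study) and a simple two-sample z-test: with a large enough sample, even a clinically trivial 0.5 mmHg difference produces a tiny p-value.

```python
# Illustrative only: hypothetical numbers, not drawn from any real trial.
# Shows how sample size alone can turn a trivial difference into a
# "statistically significant" finding.
import math

def two_sample_z_test(mean1, mean2, sd, n):
    """Two-sided z-test for two equal-size groups with a common SD."""
    se = sd * math.sqrt(2.0 / n)          # standard error of the difference
    z = (mean1 - mean2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

diff, sd = 0.5, 15.0                      # 0.5 mmHg drop; SD of 15 mmHg
for n in (100, 100_000):                  # per-group sample size
    z, p = two_sample_z_test(130.0, 130.0 - diff, sd, n)
    print(f"n={n:>7}: difference = {diff} mmHg, p = {p:.2g}")
```

At n = 100 per group the p-value is far above 0.05; at n = 100,000 it is vanishingly small, yet the half-millimeter drop in blood pressure is the same and matters to no patient. That is why a reporter has to ask about effect sizes, not just p-values.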
Enter one of the longest-running workshops at the AHCJ annual conference: the Thursday morning session “Reporting on Medical Studies.”
One of the trickiest balances a health reporter must strike is between anecdotes and evidence. The former is the compelling narrative that draws readers into an article: personal stories are engaging and help us relate to bigger, more abstract ideas.
Yet those anecdotes must be rooted in a body of research that supports their claims.
Perhaps you stumble onto an intriguing study that you haven’t seen covered and want to report on it. Or you receive a press release touting provocative findings that sound pretty astonishing … if they’re true. One potential indication of the paper’s significance and quality is the journal in which it was published.
Publication in a highly regarded journal is not in itself a guarantee that a paper is good – the blog Retraction Watch has hundreds of examples of that. In fact, one of the most famously retracted studies of all time – Andrew Wakefield’s attempt to link autism and vaccines in a small case series – was published in The Lancet, one of the top medical journals in the U.K. (Ironically, that study continues to contribute to The Lancet’s impact factor because it’s the second-most-cited retracted paper as ranked by Retraction Watch.)