Sometimes all we need is a quick suggestion from our peers to zero in on a good story. Here we turn to front-line journalists for advice, some simple insight to add to our repository of “shared wisdom.”
"What should journalists particularly pay attention to or ask about when covering a medical research study related to nutritional supplements (vitamins, minerals and other supplements)?"
First and foremost, look at funding sources for any research on dietary supplements (or food additives, or even foods or food groups, for that matter). Look out for funding from supplement companies, industry groups and nonprofits that may be biased toward one outcome or another. These conflicts of interest don't mean the study is worthless, but they should heighten your skepticism when evaluating it, and you should ask the study authors and your outside experts about them.
Beyond that, pay close attention to the supplement dose used and put it in context for your readers. If the supplement is a vitamin or mineral, compare the dose to the Dietary Reference Intake (DRI) values. How does the dose compare to what you might find in a balanced diet? If it's way above that, it's a pharmacological dose, and that's worth highlighting for your readers. The NIH Office of Dietary Supplements is an amazing resource!
As a physician source, what do you find most helpful when a journalist interviews you?
I am happiest when a young reporter is honest with me and says, "I don’t really understand this issue." We both have the same goal of getting good information out, so let me know how I can help you.
I try to provide written material (one page or less) on the topic, websites with helpful information that patients/families can use, where a reporter can link to in the story, and when appropriate, families that can speak to the issue.
Elizabeth Murray, D.O., M.B.A., is board-certified in pediatrics and pediatric emergency medicine. She works with the media regularly and is part of People Magazine’s Health Squad. Twitter: @DocEMurray
What's the first or number one way you look for the possibility that a study involves p-hacking?
I search in the page for the word "multiple" to see what they say about how they adjusted for multiple comparisons. I hope to see a thoughtful explanation of how they adjusted and why. If they say there's a good reason why they didn't adjust, I'll ask a statistician or other outside source about it. But often there is no explanation or sometimes no mention at all – even after I read through to see if they discussed it in other terms – and that's a major red flag.
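A quick back-of-the-envelope calculation shows why unadjusted multiple comparisons are such a red flag. This sketch (illustrative only; the test counts are hypothetical) computes the chance of at least one false positive across independent tests at the conventional 0.05 threshold, along with the simplest adjustment, a Bonferroni correction:

```python
# Chance of at least one false positive when running many independent
# tests at the conventional alpha = 0.05 threshold, with no adjustment.
alpha = 0.05

for n_tests in (1, 5, 20, 100):
    # P(at least one false positive) = 1 - P(no false positives)
    p_any_false = 1 - (1 - alpha) ** n_tests
    # Bonferroni: the simplest adjustment divides alpha by the test count.
    bonferroni = alpha / n_tests
    print(f"{n_tests:3d} tests: P(>=1 false positive) = {p_any_false:.0%}, "
          f"Bonferroni threshold = {bonferroni:.4f}")
```

With 20 unadjusted comparisons, the chance of at least one spurious "significant" result is roughly 64 percent, which is why a paper testing many endpoints with no mention of adjustment deserves that call to a statistician.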
Beth Skwarecki is the health editor at Lifehacker. She lives in Pittsburgh, Pa. Follow her on Twitter at @bethskw.
What do you do when you come across an animal study?
If it's a study on a cancer drug in mice, skip it. Every other day mice are cured of cancer in a lab. I do think there are certain times when animal studies are really important though, such as monkey studies testing novel therapies or those evaluating drugs for diseases that currently have no treatment.
Also, with some diseases, like Ebola, it's near impossible and highly unethical to do challenge studies in humans. I also think certain animals might be better for studying certain diseases. For example, I am learning that dogs and humans have some of the same mutations that give rise to cancer, making dogs a much better animal model.
What should journalists consider regarding the language they use in covering medical studies, to be conscientious about the individuals included in the study?
Choose the words you use carefully. This seems like basic advice, but it’s key when covering medical studies. It’s important for journalists to avoid simply repeating the terms scientists use. For example, scientists may refer to study “subjects” or “patients,” or label certain conditions “diseases,” without thinking much about whether that language is dehumanizing or robs people of agency. Journalists should choose more respectful and inclusive alternatives, evolving their language as societal definitions change.
At Spectrum, we write about autism, and we struggle all the time with these questions. Following the lead of some advocacy groups, we recently made the decision to call autism a “condition” instead of a “disorder,” for example, and we have always referred to “participants” or “people.” Some people still say our language is too medicalized, but we try to be aware of our choices and revisit our style guide often. The Associated Press offers other guidelines that might be useful.
Apoorva Mandavilli (@apoorva_nyc) is founding editor and editor-in-chief of Spectrum. She is also adjunct professor of journalism at New York University. You can read her writing here.
How and/or when do you decide that a medical study you were planning to cover actually shouldn’t be covered?
Usually I look at a few things: Some abstracts don’t list the total number of patients in a study, and then you look at the full text and there are only six participants. There was one study that garnered a ton of attention in HIV circles that was based on the experiences of just two or three participants. I chuck those. I also look at who funded the study; usually there will be a disclosures section at the end. If a pharmaceutical company funded a glowing study, it doesn’t mean I won’t cover it, but I think about what it adds to the conversation and make sure to mention the funding. In some cases, I’ve avoided summarizing yet another positive study funded by the same pharma company over and over again. I need to ask whether it is adding anything to the conversation.
What kind of biostatistical pitfalls should reporters watch out for when reporting on medical research?
Don't conflate odds ratios (OR) and hazard ratios/relative risk (HR/RR). Use real numbers. Explain that 2 out of every 10,000 people will have a condition or adverse event instead of saying that the condition or event is X% more likely. If something is 50% more likely to happen with Thing Y, but it only happens in 4 out of 100,000 people when Thing Y isn’t involved, then the increased risk actually translates to just 2 additional people out of 100,000 (a total of 6 people out of 100,000). Watch out for p-hacking as well. If you see what looks like a bunch of statistical fishing expeditions in the paper and want someone to tell you if that's what you're seeing, ask a stats person you trust to read over the analysis or double-check what you found.
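The arithmetic above is easy to reproduce. A minimal sketch, using the hypothetical numbers from the example (a 50 percent relative increase on a baseline of 4 per 100,000):

```python
# Translate a relative risk increase into absolute numbers.
baseline_per_100k = 4        # events per 100,000 without Thing Y
relative_increase = 0.50     # "50% more likely"

exposed_per_100k = baseline_per_100k * (1 + relative_increase)
extra_people = exposed_per_100k - baseline_per_100k

print(f"Without Thing Y: {baseline_per_100k} per 100,000")
print(f"With Thing Y:    {exposed_per_100k:.0f} per 100,000 "
      f"({extra_people:.0f} additional people)")
```

This confirms that the scary-sounding "50 percent more likely" amounts to two additional cases per 100,000 people.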
What is the most important ethical guideline to keep in mind when covering medical research?
All journalists should be accurate and truthful, but I think it’s especially important that health journalists internalize the idea of minimizing harm. We are tasked with reporting the most personal details of people’s lives, and on topics that can immediately impact their wellbeing. Before publishing or broadcasting a health story, journalists need to think beyond its immediate impact and consider how people will be affected for years to come through online archives. Journalists should be especially cautious about how those featured will be impacted by the story, including individuals, families and groups. Health stories have the ability to empower people, but they also have the power to stigmatize. The bottom line is to think of harm holistically.
Andrew M. Seaman (@andrewmseaman) is the senior medical journalist with Reuters Health in New York City. He is also the chair of the ethics committee for the Society of Professional Journalists, which revised its decades-old Code of Ethics in 2014. AHCJ embraces the SPJ Code of Ethics in its statement of principles.
What two critical details do you consider in looking at PR-hyped animal studies?
My two key points would be:
Sample size is key. If the study shows 90 percent efficacy but only had a sample size of four animals, the press release will call it revolutionary and groundbreaking... but a journalist shouldn't.
Also, what is the study animal? Is the study on mice? Mice aren't human, and many disease models aren't natural diseases of mice or don't behave similarly in other species.
Elizabeth Devitt is a freelance science journalist in her second career after being a veterinarian. She writes about the environment, animals, medicine and everything that connects animals to humans. Her work has appeared in National Geographic News, ScienceNOW, Nature Medicine, Cancer Discovery, San Jose Mercury News and the Bay Area Monitor, among others. Check out her website or follow her on Twitter at @elizdevitt.
How do you determine whether to cover a study or not?
Choose whether or not to cover a study not only based on its findings, but on where it was published, who it was funded by, and the general quality of its methods. Read the actual study. Simplify for audiences when necessary, such as simplifying mechanisms with analogies, but please do not simplify the conclusions to raise newsworthiness. Take pride in reporting small findings within the context that science continually evolves … a medical story can be meaningful and interesting without needing to be groundbreaking. Be humble.
Hanna Saltzman (@hannasaltzman) is a health journalist, researcher and organizer. She currently works as a research analyst at the University of Utah School of Medicine and is writing a book that aims to bring basic physiology concepts to a mainstream public.
How do you find an email address for a researcher who does not have it posted?
Sometimes I want to speak with a specific researcher, but they do not have their email address posted on their institution’s bio page (sometimes because they are also a practicing clinician who may not want to encourage patient emails), and I may not have time to track down the PIO if it’s not someone I know. I’ve found the best way to find these emails is PubMed. Most researchers have been a corresponding author on at least one paper, and a search of their name lets me check the author list to see whether they are listed as corresponding author on any of their papers. So far, this method has never failed me.
Tara Haelle (@TaraHaelle) is AHCJ's medical studies core topic leader. She is a freelance journalist and multimedia photographer who has particularly focused on medical studies over the past five years. She specializes in reporting on vaccines, pediatrics, maternal health, obesity, nutrition and mental health.
What makes a good anecdote in a health story?
If readers see themselves, or someone they love, in the person’s story, that’s a good anecdote. Reporters need to look for characters, not just quotes. A good anecdote dramatizes a situation rather than simply describing it, but it also illustrates the larger story while conforming to — not contradicting — the evidence. Inappropriate anecdotes are those that are not part of any trend and that are unsupported by, or outright contradict, the evidence base. Jenny McCarthy’s use of her son Evan to suggest that vaccines cause autism is a “poster child” for using an anecdote irresponsibly.
Liz Szabo has covered medical news for USA Today since 2004. Her work has won awards from the Campaign for Public Health Foundation, the American Urological Association and the American College of Emergency Physicians. Szabo worked for the Virginian-Pilot for seven years, covering medicine, religion and local news.
What kinds of misunderstandings can contribute to distrust between journalists and researchers?
Research is cautious and leaves room for new information or even for being wrong; it moves along incrementally with small advances. The media tends to want big definitive statements and jaw-dropping breakthroughs. That gap is difficult to bridge and leads to an avalanche of misreported findings, which then makes researchers loath to talk to journalists. If a journalist can put the findings in context without exaggeration and make connections to everyday life or human culture, the story is more interesting AND accurate.
Molly Gregas’ broad interests in science and communication stem from growing up in a family of writers, teachers and academics. She earned her PhD in biomedical engineering from Duke University and spent several years in research before immersing herself in a variety of science-related communication, education and outreach initiatives. She works as a writer, editor and research communication specialist and is based in Toronto, Ontario.
What part of a study may be overlooked — but shouldn’t be — by journalists?
Never ignore the section on the limitations of the study. Always read the whole study. There's often a lot of interesting information packed into the methods section, etc.
Elizabeth DeVita-Raeburn writes primarily about medicine, science and psychology. Her new book, The Death of Cancer, will be published by FSG in November 2015. Follow her on Twitter at @devitaraeburn.
What’s your advice for a brand new reporter covering medical studies that veterans may take for granted?
This might go without saying for most of us, but I think it is worth repeating for people new to the beat: Get your hands on the actual study and call up the researcher; don't just read the press release. Press releases sometimes exaggerate or suggest news hooks that don't really represent the research. Also, balance your story by interviewing an expert who wasn't involved in the study.
Tracy Miller (@MillerTracyL) has reported on health and medicine as a senior digital editor for Prevention magazine and the New York Daily News.
What is the most important point for reporters to convey in covering observational/epidemiological studies?
Correlation is not causation. Repeat. Keep repeating. I see too many reports saying that two things are correlated, and therefore one causes the other. This is usually not the case.
Amy Vidrine has been a research scientist for over 10 years, in microbiology, molecular biology, and biochemistry. She has recently started writing fiction and can be found on Twitter.
What should reporters keep in mind when reporting on the findings of just one new study?
Single findings should be viewed in the context of the bigger picture of all other findings on the subject. Findings frequently contradict each other; differences may stem from study methods, sample sizes or demographics, or from flaws in either study.
I always try to find recent review articles that can accurately describe that bigger picture. If a finding is confusing or radical, ask the researcher or another expert; they can also provide plain-language context and a sense of scale.
Olivia Campbell (@liviecampbell) is a freelance journalist whose writings on medicine and mothering have appeared in Pacific Standard Magazine, Brain, Child Magazine and The Daily Beast.
What are the journalistic red flags with epidemiology statistics?
Journalists should be very careful with epidemiology statistics – in particular, prevalence.
To use one very controversial example, the prevalence of autism spectrum disorders has increased from 1 in 150 children a decade ago to 1 in 88 now, according to the CDC. That statistic doesn't tell us whether the condition is more common than it was a decade ago, only that it is more frequently diagnosed. (Which may be the result of better screening and an expanding definition of ASD, not higher incidence.)
Alex Wayne (@aawayne) writes about health care policy for Bloomberg News.
What antennae go up when you see a press release about a study with remarkable findings?
Never trust the press release about a study. If press releases were to be believed, we’d have cured cancer, Alzheimer’s and the common cold. Get the study and read it for yourself.
Markian Hawryluk is a health reporter with The Bend (Ore.) Bulletin. He spent 15 years as a health policy reporter in Washington, D.C., writing for trade publications. He has won multiple awards for his health reporting, including the Bruce Baer Award, Oregon’s top prize for investigative journalism. Last year, he was a Knight-Wallace Fellow at the University of Michigan and is a member of AHCJ’s 2013-14 class of Regional Health Journalism Fellows. He recently reported on a local clinic that decided to kick out the drug reps – and how it changed their practice.
How do you get researchers to open up during an interview?
When covering a study, I try to avoid asking researchers to sum up their findings for me or asking what they think is most important about the research right off the bat. Rather, I ask about specific numbers or mention something I thought was interesting in the findings. Many researchers, as sad as it is, perk up when they realize you actually read their study and were interested in it, and aren’t just calling based on a press release headline.
It can also be helpful to ask a source, "What is most important for patients, or their family members, to know about this?" That can get researchers out of medical jargon, if you're writing for a consumer audience like I usually do.
I always like to end an interview with the question, "Is there anything else you would like to highlight?" Some sources won't have anything to say, but others will rephrase an earlier point in a helpful way, or bring up a research or policy implication I hadn't thought about. Either way, they often appreciate being asked!
Genevra Pittman is a medical journalist for Reuters Health in New York. She is a graduate of Swarthmore College and New York University’s Science, Health and Environmental Reporting Program. When not writing and reading about health and medicine, she runs, roots loudly for Boston sports teams and plays fetch with her cats.
What advice do you give your staff about finding and reporting absolute risk in medical studies?
Note: Absolute risk is a person’s risk of developing a disease over a given time period. It’s important to report absolute risk alongside a study’s reported relative risks to keep the benefit of a test or treatment from being exaggerated in readers’ minds.
• In the best-case scenario, a study spells out the absolute risks of a given condition in the treatment and control groups. That just requires noting them in the story.
• In a close second-best scenario, the text doesn’t note the absolute rates, but a table or figure – usually Table 2 – notes the percentages of each group, or the absolute numbers, that had the condition in question. Pick a few representative numbers and include them.
• Sometimes it is more difficult to tell. For example, when a cohort is stratified into quartiles based on a risk factor, and each group has a higher percentage of a given condition than the next, it may be necessary to give the overall number of subjects with a given condition. That’s not ideal, of course, but it at least gives some context for “25 percent greater,” etc.
• Finally, sometimes the report does a particularly bad job of highlighting absolute risk and does none of the above. That probably suggests some weaknesses, but in this case we can note that the study did not include an absolute risk and go to a source like MedlinePlus to find a general population figure for the condition. Again, that at least gives some context.
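The best-case scenario in the first bullet takes only a few lines of arithmetic. A minimal sketch with hypothetical Table 2 counts (12 of 1,000 treated patients and 20 of 1,000 controls had the condition; any study's actual numbers slot in the same way):

```python
# Compute absolute and relative risk from raw treatment/control counts.
events_treated, n_treated = 12, 1000    # hypothetical Table 2 numbers
events_control, n_control = 20, 1000

risk_treated = events_treated / n_treated     # absolute risk, treated
risk_control = events_control / n_control     # absolute risk, control
relative_risk = risk_treated / risk_control
absolute_reduction = risk_control - risk_treated

print(f"Absolute risk: {risk_treated:.1%} treated vs {risk_control:.1%} control")
print(f"Relative risk: {relative_risk:.2f} ({1 - relative_risk:.0%} relative reduction)")
print(f"Absolute reduction: {absolute_reduction:.1%}, "
      f"or {absolute_reduction * 100000:.0f} fewer people per 100,000")
```

With these numbers, a "40 percent relative reduction" is an absolute difference of just 0.8 percentage points, exactly the kind of framing the bullets above recommend spelling out for readers.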