How do you decide when a health policy study is worth writing about and how do you approach it?
I want to know whether it’s part of a trend and whether it works. We hear a lot about social determinants of health right now, and I want to know if there’s a study out there showing that something is actually working. I’m less interested in what we already know, such as Medicaid work requirements causing people not to receive care.
My wise old editors back when I was a young reporter at the Des Moines Register and Chicago Tribune would say, “What’s sexy about it?” I want to know, in real practice, is it working? Is some of the value-based care working? Also, really look at the dates [of the data] because, especially in health insurance, they’re talking about real-time data — they know right away when you’re getting your prescription, whether you’re taking your medications, etc. Anything more than a couple of years old is usually worthless. I’m looking for information as fresh as possible.
Bruce Japsen is a Forbes senior contributor who has covered health care for three decades and teaches in the University of Iowa School of Journalism MA in Strategic Communication program. He occasionally appears on Fox Business News and on WBBM News Radio 780 and 105.9 FM. Follow him at @BruceJapsen.
What do you look for in a study to determine if it’s good enough to cover?
I look up the clinical trial in ClinicalTrials.gov and check to see if they changed or added endpoints — and I make sure to report what they were looking for (whether they found it or not), not just the incidental finding that made for a better headline. And more generally, does the outcome make sense for the question they're trying to answer? For example, triggering a blood sugar fluctuation is not the same thing as preventing diabetes or causing weight loss. If they're analyzing a ton of data, I make sure they say in the methods what they're doing to correct for multiple tests — because if they're not, the whole paper could be garbage. I also check key premises mentioned in the introduction (such as why this topic is worthy of study, or some basic fact they're assuming is proven). Sometimes it's a rabbit hole of citations ending with one author citing his own opinion.
Beth Skwarecki (@BethSkw) is a freelance health and science writer who has covered studies for Lifehacker, Medscape, ScienceNow, and Scientific American.
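To make the multiple-testing point above concrete, here is a minimal, purely illustrative Python sketch (assuming 20 independent endpoints, each tested at a 5 percent significance level): even with no real effect, the chance of at least one spurious "significant" result is high.

```python
# Why uncorrected multiple testing matters: each test alone has a 5%
# false-positive rate, but across 20 independent endpoints with no true
# effect, the chance of at least one spurious "significant" result soars.
alpha = 0.05
n_endpoints = 20  # hypothetical number of outcomes tested

p_at_least_one_false_positive = 1 - (1 - alpha) ** n_endpoints
print(f"{p_at_least_one_false_positive:.0%}")  # prints 64%
```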
How do you judge when to report on press releases about developing drugs?
Companies may need to issue news releases (who has a “press” anymore?) to satisfy regulatory requirements to keep investors and the markets informed. Whether we report on the releases or not, they have done what is required. So, as Tara mentioned, we do not have to report on every tidbit, and when we do choose to report on something, we need to keep it in proper perspective as to the source, the stage of development, efficacy vs. adverse effects, any financial relationships that commenters may have, and the context vis-à-vis other similar or competing goings-on in the field. Also, remember that entities pitching stories offer up only happy, live patients. They generally don’t march out the dead ones.
Dan Keller specializes in writing and reporting about basic and clinical medical research and related biotechnology for health care professionals, patients, and the public. He holds a doctorate in immunology and microbiology and has additional research experience in hematology and neuroendocrinology.
What best practices are you following in covering preprints during the pandemic?
I've always sought independent comment on the preprint studies that I've considered covering, but I feel that this pandemic has made me much more vigilant about ensuring the quality of preprints that I report on. These are studies that have not undergone formal vetting by other scientists for the quality of their design and results, so, for starters, asking other researchers to comment on the quality of a paper's methodology can be very informative. Context is also very important when mentioning preprints in stories. I try to give readers a sense of what other papers in the same field have found and whether those conclusions are in line with the preprint findings.
During this pandemic, there’s been an even bigger push to get stories out fast, and some coronavirus sources have been more difficult to reach because they have been so busy — I’ve had to work harder as a reporter because of this. But it’s important not to get swept up in a preprint and run with it without vetting it properly. We shouldn’t let journalistic standards become another victim of this pandemic.
Roxanne Khamsi is a science writer based in Montreal, Canada. You can follow her on Twitter at @rkhamsi.
How do you convey to readers, many of whom may not really understand the distinction, the difference between scientific papers that have been peer reviewed and those that haven’t?
It's important for scientists to be clear on those types of caveats, and for reporters to include them in their stories. Pre-prints are a double-edged sword, though I think on balance one edge is sharper than the other. It can take weeks or months for a journal to publish a paper, especially at a time like this, when journals are flooded with submissions.
I can think of one important paper, on viral shedding in Covid-19 patients, that was on a pre-print server about a month before it was published. Before pre-prints, the data contained in papers in the pipeline was hidden from view. The world needs access to that data now; publishing it in pre-print form is helping people understand the transmission dynamics and risks of this virus. But there have been bad papers published as pre-prints that have been major distractions. At least one was withdrawn, but its impact remains.
I think the benefits still outweigh the risks, but there are definitely downsides to the wide-scale reporting on pre-prints in this pandemic. Reporters really need to make clear these aren't yet peer-reviewed results.
Helen Branswell is a senior writer covering infectious diseases and global health for Stat. She can be found on Twitter at @Branswell.
As a journalist, how do you communicate changing understandings, predictions and recommendations without undermining your credibility?
This is one of the hardest parts about covering science as a journalist — explaining again and again that it's an iterative, self-correcting process, and that even when everybody does everything right, some findings won't hold up over time (not to mention fraud, p-hacking, file drawer effect, etc.).
I think it can be respectful to readers and can defuse some frustration and confusion if we say more explicitly: what researchers do know today, why that's different from what they thought yesterday, and what they still don't know and hope to find out tomorrow.
Laura Helmuth is the new editor-in-chief at Scientific American after leaving her position as health, science & environment editor at The Washington Post. She previously edited at National Geographic, Slate, Smithsonian and Science and was President of the National Association of Science Writers from 2016 to 2018. Follow her @laurahelmuth.
How are you finding diverse sources and avoiding using the same people again and again for your coronavirus/COVID-19 stories?
Most of my stories are different enough that I don't run into reusing experts. One trick I've used: if I interview a source for a particular story, I'll reach out to them again for a separate story and see if there are other folks they'd recommend I talk to. They're always paying it forward, in some way. And I always make it clear that I'd love to speak with women or people of color.
Wudan Yan is an independent journalist in Seattle who has been covering coronavirus for Huffington Post, MIT Tech Review, The New York Times, Science and more. Follow her @wudanyan and see her coronavirus reporting here.
Why is it important to be sure the person you use as a source has the knowledge, experience, training, etc. in the specific topic area you're writing about as opposed to a generalist?
When it comes to ensuring someone has expertise specifically in the area I need, how I assess their knowledge and experience depends on the topic. For example, almost any epidemiologist could comment on some things, such as the basics of study design or general types of bias, that run through the general epidemiology curriculum. But we have our own niches and specialties just like any other field. Asking someone to comment on an area outside of their particular niche runs the risk of interviewing someone with only a superficial understanding of the topic.
They might still be able to address it, but they’ll likely lack the depth to be able to put new findings into context or discuss the history of a particular area and how any new information changes the field. They might not know how well a new publication or research finding is accepted by others in the field, or whether it’s controversial and contradicts other published literature. They might not know if the group or person doing the research is reputable or has a history of poor studies or paper retractions.
You’re just opening yourself up to unforced errors if you choose an interviewee without solid knowledge of the niche you’re writing about.
Tara C. Smith is a professor of public health specializing in epidemiology and infectious diseases at Kent State University. She’s a columnist for Self.com and writes freelance articles for a wide range of other news publications. Follow her at @aetiology.
When a fast-moving, high-profile public health story is unfolding, what do you do to ensure the experts you interview are appropriately qualified for the topic?
In general terms, when I am looking for story sources, I go to PubMed and try as many keywords as I can think of to see what pops up. Then I look at how frequently and how recently someone has published, and at who the co-authors are — are they names I recognize? I will also look at their faculty pages.
It’s important to take some time to do this, even in a fast-breaking story. Hypothetical example: If the CDC comes out with startling news about a “vectorborne” disease, you had better do enough of a read on your results to separate the mosquito people from the tick people, and the human disease people from the animal people, or you will waste a lot of time emailing.
I also Google to see whether those people have been interviewed, and also, whether they have been interviewed too much — I don’t want to be their 1000th interview, and I don’t want to be copy/pasting what they have said elsewhere.
As a reporter who specializes in emerging infections and outbreaks, I feel a special responsibility to avoid showcasing inflammatory language — it’s click-attracting but I think it is harmful to our mission of informing the public. (Sorry, traffic gods.) So when I Google to see whether and how much possible sources have spoken, I am also looking for the quality of their expression. On a spectrum of not-descriptive to OMG, I try to pick people who land in the middle.
Maryn McKenna is a freelance journalist who covers public health, global health and food policy. She is @marynmck on Twitter.
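For reporters who want to script this kind of PubMed keyword check, here is a small sketch using NCBI's public E-utilities API. The search term is a hypothetical example, and heavy use requires an API key per NCBI's usage policy.

```python
import json
import urllib.parse
import urllib.request

# Query PubMed via NCBI E-utilities for a topic, returning the total hit
# count and the most recent article IDs (PMIDs) to identify active authors.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
term = "tick-borne encephalitis"  # hypothetical search term

url = f"{BASE}?db=pubmed&term={urllib.parse.quote(term)}&retmax=5&retmode=json"
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

print(f"{result['count']} PubMed records match '{term}'")
print("Recent PMIDs to check for authors:", result["idlist"])
```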
How do you find patients to report on how the research affects people?
Find real people to illustrate the real-life impact of cancer research. You can try patient groups, but most no longer comment on drug prices, perhaps because they now get so much of their funding from the pharmaceutical industry. Doctors and hospitals also receive industry support and may not comment on drug prices, either. Check out this helpful (but small) list of patient groups that don't take industry funding.
Check out patient and consumer forums on Twitter, Facebook and other social media. Establish a presence in these communities a few weeks or months before you begin asking questions, so you can understand how they work. If you can't find a patient forum that fits your needs, create your own.
We created a Facebook discussion group after our story debuted, and comments helped to fuel additional stories. While it's important to have data for a story like this, it's also helpful to tell your story through main characters who can illustrate the policies and the emotional and physical side effects of the cancers you're describing.
A groundbreaking series by reporter Liz Szabo of Kaiser Health News found that many cancer drugs, most in fact, are abject failures: overhyped, overmarketed and fraudulently advertised. They cost so much that patients often quit taking their medicines, going without treatment until death.
What's a simple rule of thumb for deciding whether to cover the introduction of a legislative bill related to health/medicine or medical research?
Here’s a good science analogy. Covering a bill's introduction is like covering results of a Phase I or II trial.
It's sometimes worthwhile, depending on the audience, the product and so on, but you need to be careful how you write the story so as not to paint the picture that a law is close at hand when the bill is more likely to fail.
Sarah Karlin-Smith is a health care reporter at Politico, specializing in the policy and politics that affect the pharmaceutical industry and patients needing medicine.
What are some tips that make coverage of a medical conference easier?
1) Read abstracts, and make sure they’re the ones relevant to your coverage. Having general questions can help, but in my experience, researchers (especially doctors) REALLY appreciate it when they talk to journalists who actually know their stuff, ask informed questions and don’t have to get them to “dumb down” what they say.
2) Go to poster sessions. These are great if you need a few “on the street” comments, and the researchers are usually eager to talk about their work.
3) In oral sessions, sit as close to the podium as possible. This way you can rush the stage before anyone else.
4) Schedule sit-down meetings physically close to each other and to where you and/or the interviewee needs to be. This goes a long way toward minimizing travel time. If possible, schedule meetings about 10-15 minutes apart. That way, you have a little bit of wiggle room, and that’s enough time to walk fast at most convention centers. For example, it’s about a 10-15 minute walk between the press room and the exhibit hall at ASCO, held at McCormick Place in Chicago, North America’s largest convention center.
5) Wear comfortable shoes.
Alaric DeArment is a senior reporter covering biopharma at MedCity News and has covered the industry and health care for more than 10 years. His Twitter handle is @biotechvisigoth.
What is the risk of using words such as 'may' or 'might' in a headline about a medical study?
The fundamental problem with using words such as "may" or "might" in a headline is that it conveys very little actual information for readers and has the potential to mislead them. For example, if a headline says that "Therapy X May Be The Solution To Health Problem Z," it could just as easily say "Therapy X May Not Be The Solution To Health Problem Z." In other words, it's not giving readers much actual information. (Can you imagine a headline that reads "Therapy X May Or May Not Be The Solution To Health Problem Z"? I can't either.) What's more, readers could easily read more into the language than you (or your copy editor) intended – translating "Therapy X May Be The Solution" into "Therapy X Is The Solution." This is particularly true for readers who are experiencing the relevant medical problem or have loved ones who are.
So, what's a headline writer to do? Try to be as specific as possible, based on the details of the relevant research. For example, "Study Finds Therapy X Reduced Symptoms For Some Health Problem Z Patients." Your mileage may vary, depending on the details of the study (and the amount of headline space you have to work with), but it's certainly worth thinking carefully before incorporating "might," "may" or "could" into your headlines.
Matt Shipman is the Research Communications Lead at North Carolina State University and author of the Handbook For Science Public Information Officers. He is a former contributor to HealthNewsReview.org.
What is one of the most valuable metrics in medical research that health journalists should pay attention to?
Medical studies frequently use relative risk to express the difference in outcome rates between an intervention and control group. A 5 percent event rate in the control group compared to a 4 percent event rate in the intervention group would lead to a relative risk reduction of 20 percent — pretty impressive! But the absolute risk reduction in this case (1 percent) may be more informative. The biggest secret in medicine, in my opinion, is that for most interventions, these absolute risk reductions are quite small. That 1 percent absolute risk reduction above? That means you'd need to treat 100 people to avoid one bad outcome. Or, put another way, you'll treat 99 people unnecessarily. The catch is that it is really hard to figure out who the one special patient will be — so we end up treating everyone. The number needed to treat (NNT) brings this all into perspective and allows us to make informed decisions: Am I willing to take on the risks of a new medication (be it in terms of dollar costs or side-effects) for a small chance of a large benefit? That's a key discussion for each patient to have with his or her doctor.
F. Perry Wilson, M.D., M.S.C.E., is an assistant professor of medicine in the Section of Nephrology at Yale University School of Medicine’s Program of Applied Translational Research. Check out his YouTube channel on reporting medical research and follow him on Twitter at @methodsmanmd.
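As a quick sketch of the arithmetic in Wilson's example (5 percent vs. 4 percent event rates), the three metrics work out like this in Python:

```python
# Compute absolute risk reduction (ARR), relative risk reduction (RRR)
# and number needed to treat (NNT) from the example's event rates.
control_rate = 0.05       # 5% event rate in the control group
intervention_rate = 0.04  # 4% event rate in the intervention group

arr = control_rate - intervention_rate  # absolute risk reduction
rrr = arr / control_rate                # relative risk reduction
nnt = 1 / arr                           # number needed to treat

print(f"ARR: {arr:.1%}, RRR: {rrr:.0%}, NNT: {nnt:.0f}")
# ARR: 1.0%, RRR: 20%, NNT: 100
```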
What is most important for journalists to keep in mind when covering a nutrition health story?
The most important thing for journalists to remember when covering a health story is that their coverage also influences broader societal scientific literacy.
If journalists cover a medical study that involves in-vitro models, animal models or surrogate endpoints, where there is no clear clinical relevance or outcome, and the coverage is not exceedingly cautious and nuanced, they provide oxygen to the fire of medical quackery, whose lifeblood is research that suggests a remote possibility of benefit but is sold as life-changing.
Yoni Freedhoff, M.D., (@YoniFreedhoff) is an assistant professor of family medicine at the University of Ottawa and the founder and medical director of the Bariatric Medical Institute.
What's the most important thing for journalists to look for when covering medical studies related to dementia?
There are reams of studies purporting to link dementia risk with myriad factors, like eating (or not eating) certain foods or even how many children a woman has. As we all should know, association is not causation. Just because two events both occur in a given time frame doesn’t mean they’re necessarily connected. The literature is filled with poorly controlled clinical trials, questionable data and results taken out of context.
It’s very difficult to evaluate dementia patients accurately because there’s no simple, inexpensive assessment, like a blood test, available yet. PET scans can pinpoint buildup of plaque in the brain, but they’re expensive and usually not covered by insurance. Scientists are working on pinpointing genetic biomarkers, similar to how some cancer patients are screened, but aren’t there yet.
As journalists, we have a responsibility to keep asking hard questions about studies that link external factors with this disease. Make sure the evidence is solid: that the study was large enough to yield statistically meaningful results, and that subjects were appropriately screened, e.g., with brain scans, which provide a quantifiable measure of change. Cognitive screening tests are much more subjective and, while important, can’t offer the same type of hard data. Journalists not only need to understand the different evaluations available for those with cognitive impairment but also to understand that dementia is not one disease and manifests differently in every individual.
Liz Seegert (@lseegert) is AHCJ's topic leader on aging and an independent journalist whose reporting and writing background spans more than 25 years in print, broadcast and digital media.
What is the most important thing that health journalists can do to improve their reporting of medical research?
I would probably suggest stopping the reporting of association studies for nutrients and foods. Have a moratorium. I think reporters can ask for new approaches, and ask what is being done differently compared with what has been done all along. If we can give the field some breathing room, we may see some new approaches. The truth may be intangible, but at least we will not be misled.
I think we need to take a step back and not assume that we have managed to measure everything. Respect the complexity, try to dissect that complexity and see if it is dissectible. We should probably avoid making recommendations and telling people to eat this and not eat that; it’s just premature.
John Ioannidis, M.D., D.Sc. is a professor of medicine and health research and policy at Stanford University School of Medicine in California.
What should journalists particularly pay attention to or ask about when covering a medical research study related to nutritional supplements (vitamins, minerals and other supplements)?
First and foremost, look at funding sources for any research on dietary supplements (or food additives, or even foods or food groups, for that matter). Look out for funding from supplement companies, industry groups and nonprofits who may be biased towards one outcome or the other. These conflicts of interest don't mean the study is worthless, but they should heighten your skepticism when evaluating the study, and you should ask study authors and your outside experts about them.
Beyond that, pay close attention to the supplement dose used and put it in context for your readers. If the supplement is a vitamin or mineral, compare the dose to the Dietary Reference Intake (DRI) values. How does the dose compare to what you might find in a balanced diet? If it's way above that, it's a pharmacological dose, and that's worth highlighting for your readers. The NIH Office of Dietary Supplements is an amazing resource!
Alice Callahan (@scienceofmom) has a doctorate in nutrition, helpful background for her work as a freelance health reporter and as the author of "The Science of Mom: A Research-Based Guide to Your Baby's First Year."
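To illustrate the dose check Callahan describes, here is a hypothetical sketch. The RDA value and the fivefold cutoff below are illustrative assumptions, not official thresholds; confirm actual DRI values with the NIH Office of Dietary Supplements for the relevant age and sex group.

```python
# Hypothetical dose-context check. The RDA below is an illustrative value
# for one nutrient/population; the 5x cutoff for "pharmacological" is an
# arbitrary editorial assumption, not an official threshold.
ILLUSTRATIVE_RDA_MG = {"vitamin C": 90.0}  # approx. RDA for adult men, mg/day

def dose_context(nutrient: str, dose_mg: float) -> str:
    ratio = dose_mg / ILLUSTRATIVE_RDA_MG[nutrient]
    flag = "pharmacological dose" if ratio > 5 else "near dietary levels"
    return f"{dose_mg:g} mg of {nutrient} is about {ratio:.0f}x the RDA ({flag})"

print(dose_context("vitamin C", 1000))  # about 11x the RDA: pharmacological
```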
As a physician source, what do you find to be most helpful as a journalist interviews you?
I am happiest when a young reporter is honest with me and says, "I don’t really understand this issue." We both have the same goal of getting good information out, so let me know how I can help you.
I try to provide written material (one page or less) on the topic; websites with helpful information that patients and families can use, which a reporter can link to in the story; and, when appropriate, families that can speak to the issue.
Elizabeth Murray, D.O., M.B.A., is board-certified in pediatrics and pediatric emergency medicine. She works with the media regularly and is part of People Magazine’s Health Squad. Twitter: @DocEMurray
What's the first or number one way you look for the possibility that a study involves p-hacking?
I search in the page for the word "multiple" to see what they say about how they adjusted for multiple comparisons. I hope to see a thoughtful explanation of how they adjusted and why. If they say there's a good reason why they didn't adjust, I'll ask a statistician or other outside source about it. But often there is no explanation or sometimes no mention at all – even after I read through to see if they discussed it in other terms – and that's a major red flag.
Beth Skwarecki is the health editor at Lifehacker. She lives in Pittsburgh, Pa. Follow her on Twitter at @bethskw.
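The simplest adjustment a methods section might describe is a Bonferroni correction: divide the significance threshold by the number of comparisons. A minimal sketch with made-up p-values:

```python
# Bonferroni correction: with 4 comparisons, a result must clear
# alpha / 4 = 0.0125 (not 0.05) to count as significant.
alpha = 0.05
p_values = [0.004, 0.030, 0.040, 0.200]  # hypothetical p-values for 4 endpoints

threshold = alpha / len(p_values)
for p in p_values:
    verdict = "significant" if p < threshold else "not significant"
    print(f"p = {p:.3f}: {verdict} at adjusted threshold {threshold:.4f}")
```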
What do you do when you come across an animal study?
If it's a study on a cancer drug in mice, skip it. Every other day mice are cured of cancer in a lab. I do think there are certain times when animal studies are really important, though, such as monkey studies testing novel therapies or those evaluating drugs for diseases that currently have no treatment.
Also, with some diseases, like Ebola, it's near impossible and highly unethical to do challenge studies in humans. I also think certain animals might be better for studying certain diseases. For example, I am learning that dogs and humans have some of the same mutations that give rise to cancer, making dogs a much better animal model.
Emily Mullin (@emilylmullin) is associate editor for biomedicine at MIT Technology Review.
What should journalists consider regarding the language they use in covering medical studies, to be conscientious about the individuals included in those studies?
Choose the words you use carefully. This seems like basic advice, but it’s key when covering medical studies. It’s important for journalists to avoid simply repeating the terms scientists use. For example, scientists may refer to study “subjects,” to “patients,” or to certain conditions as “diseases” without thinking too much about how dehumanizing that is or whether it robs people of agency. It’s important that journalists choose more respectful and inclusive alternatives, evolving their language as societal definitions change.
At Spectrum, we write about autism, and we struggle all the time with these questions. Following the lead of some advocacy groups, we recently made the decision to call autism a “condition” instead of a “disorder,” for example, and we have always referred to “participants” or “people.” Some people still say our language is too medicalized, but we try to be aware of our choices and revisit our style guide often. The Associated Press offers other guidelines that might be useful.
Apoorva Mandavilli (@apoorva_nyc) is founding editor and editor-in-chief of Spectrum. She is also adjunct professor of journalism at New York University. You can read her writing here.