Shared wisdom

Sometimes all we need is a quick suggestion from our peers or experts to zero in on a good story. Here we turn to front-line journalists and interested experts for advice, some simple insight to add to our repository of “shared wisdom.”

When working on a story that involves a lot of medical research, do you have a system for organizing findings so you can compare and easily recall them when needed?

I annotate studies as I review them using a PDF reader, which makes it easier to go back and pull key findings. I also relabel and organize all the studies I download by year, which I find makes it easier for me to navigate back to studies later when I need to review them.

Ted Alcorn is a journalist and educator with expertise in gun violence prevention policies and programs. He contributes reporting to The New York Times and other national publications, and is a lecturer at Columbia University’s Mailman School of Public Health and NYU’s Wagner School of Public Service.

How do you cope with information fatigue during the COVID-19 pandemic and decide on what you’ll focus on in your coverage?

When I work on a story about COVID-19, I zero in on the information I need for that story rather than trying to vacuum up everything on Twitter with a COVID-19 hashtag. That sort of unfiltered intake leads to bad reporting because you lose your focus.

I would say to everyone that they should employ good information hygiene. Don't feel like you have to spend an extra four hours every day (or very late at night) reading rumors that won't do you any additional good.

Carl Zimmer is an award-winning New York Times columnist and the author of 13 books about science. His newest book is “She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity.” Follow him at @carlzimmer.

What do reporters most commonly get wrong or miss when reporting on medical studies related to cancer?

One of the key elements that reporters often get wrong when reporting about cancer topics, or don't elaborate on enough, is what they mean when they say "cure" or "cured." Scientifically, the word is poorly defined, and medical and scientific professionals rarely use it without context. Cure is a very emotive word in cancer, and the media liberally and frequently uses it to describe people initially going into remission from their cancer (meaning there is no tumor detectable). It makes great headlines, but is often incorrect, as people who go into initial remission can, and sadly frequently do, relapse.

In medical communities, cure is seldom used in this context and often only crops up when the person in remission has been cancer-free for several years and previous studies have shown that their risk of relapse is very low indeed. Terms such as "five-year event-free survival" are often used instead. For some cancer types, the risk of relapse is very low after this time, so "cure" can be used with reasonable certainty; for others, the risk of the cancer returning remains high. Related to this, some reporters also poorly grasp that for many people with metastatic (chronic) cancer, cure is not a viable aim and, medically, treatment goals are often to keep the cancer at bay for as long as possible whilst preserving a good quality of life for the patient. These patients will likely never be cured, and this leads to erasure of people with metastatic cancer in the media because their experience strays from the black and white of "this person was cured" or "this person died from their cancer." The grey area of keeping patients alive while maintaining quality of life, of people "living with cancer," seems to be an area of confusion for some reporters who aren't experienced with covering oncology topics.

Victoria Foster is a survivor of childhood leukemia and a postdoctoral research scientist focusing on childhood cancers and new, targeted cancer therapies. She’s a Forbes contributor and has written for The Times, The Guardian, Cancer Therapy Advisor and various cancer-focused outlets. 

With so many start-ups and health tech companies trying to get journalists' attention, how do you determine which ones really deserve coverage with regard to the research they have to back up their ideas and claims?

It can be daunting to decide which health tech ventures warrant coverage because they can be very buzzy and seductive, but two rules have served me well through the years. 1) If their stated objective is to "disrupt" health care, run—don't walk—away (or the modern-day equivalent—just go silent). 2) Any coverage should be tempered with healthy journo skepticism and should reflect very real due diligence on both the venture/founders and the story being pitched. Big rounds of VC (venture capital) funding aren't enough because they're often misleading, especially in health care.

With degrees in engineering and journalism, Dan Munro has been covering health care at the intersection of tech and policy for over 10 years. He is the author of Casino Healthcare and a Forbes contributor with additional bylines at Newsweek, re/code (now Vox), HuffPo, Techonomy, and TedMed. Follow him at @danmunro.

How do you handle an interview where the scientist appears to be over-interpreting, oversimplifying, exaggerating or otherwise not representing their results in a way that seems cautious and balanced enough?

Before interviews, I make sure I've gone through the study's methods section to see what weaknesses or limitations I should ask about. When the scientist seems to over-interpret or exaggerate their results, I'll repeat their interpretation and ask what their colleagues might think about those conclusions. After that, usually the scientist shares some caveats to their findings. I also ask about specific aspects of the experiment that could limit the strength of the findings or ask about alternate interpretations of their results. When I talk to outside experts and discuss the limitations of new findings, I might go back to the study author later and ask more specific questions about those limitations.

Jackie Rocheleau is an independent science journalist covering brain sciences and public health, on Twitter @JackieRocheleau.

If you're covering a topic or condition you have less background/knowledge about, how do you quickly learn what you need to in order to write about it?  

When I’m in unfamiliar territory, I start by looking for a scientific or medical review article on the topic. Other times, I look to see if other science journalists have covered similar topics, and I jot down interesting points I want to follow up on.

While sometimes uncomfortable, I don’t hesitate to ask someone I’m interviewing if they might direct me to an explainer or resource for lay people. Finally, if they can’t explain their research to me clearly—in “muggle speak,” as I put it—I don’t write about that topic or specific person’s work.

Dr. Judy Stone is an infectious diseases specialist and science journalist. She is the author of “Conducting Clinical Research” and “Resilience: One Family’s Story of Hope and Triumph over Evil.”

How do you find studies that help you break news?

Just as attending meetings off the beaten path can help you break news when attendees talk “among ourselves,” the same is true of finding studies in oft-overlooked journals. So, for instance, I wanted to write about the quiet belief by many hospital executives that hospital-acquired infections were profitable. A study I found in the Journal of Healthcare Management through creative use of Google Scholar enabled me to put that belief “on the record.” Meanwhile, the study concluded that changes in reimbursement meant “targeted improvement” (!) in patient safety could now help a hospital’s bottom line.

Given my own interests in quality, safety, patient-centered care and digital health, I get email updates from BMJ Quality & Safety and the Journal of Medical Internet Research family of publications. In addition, I serve on the editorial board of the American Journal of Medical Quality. I’ll also dip into the websites of some other little-followed journals (e.g., the Journal of Patient Experience or Medical Decision Making) in search of research to inspire or beef up a blog post. But given the often mind-numbing amount of academic dross, you have to be ready for patient digging to unearth the occasional nugget of gold.

While many (though not all) journal articles lie behind paywalls, the researchers and the journal where the article appeared can typically help you access the full article.

Michael L. Millenson, a three-time Pulitzer Prize nominee as a reporter for the Chicago Tribune, is now a blogger, consultant and adjunct associate professor of medicine at Northwestern University’s Feinberg School of Medicine.

How can you tell how generalizable a health care policy or economics study from another country is to the U.S., or vice versa?

I've lived and worked in both the U.S. and Europe (The Netherlands and U.K.). When doing comparative analyses, context is very important. So, for example, in comparing the British NHS to the U.S. health care system, it's important to first point out that the U.S. doesn't have one system. It's highly fragmented. So, there may be elements in the U.S. system that compare favorably in terms of mortality and morbidity indices while the average isn't as good. And, in generalizing or importing policy solutions from one country to another the context caveat is even more important. In some cases, importing a certain health care policy can't be done because the system won't allow for it.  

Joshua P. Cohen, Ph.D., is an independent health care analyst, freelance writer and teacher. His specialties are healthcare policy, public health and drug pricing and reimbursement.

What’s an important aspect of covering studies that journalists commonly overlook?

Some journalists tend to treat all study designs equally and are not aware of the hierarchy of evidence, or the fact that some designs are much stronger for making causal inference than others. For example, there are now several randomized controlled trials that show that hydroxychloroquine is not effective against COVID-19. In the early phase of the pandemic, seriously flawed ecologic and observational studies were used to make claims about its value. So, pointing out the biases of observational studies should be a critical aspect of reporting on medical studies.

Madhukar Pai, M.D., Ph.D. (@paimadhu) is a Canada Research Chair of Epidemiology & Global Health at McGill University, Montreal, where he serves as associate director of the McGill International Tuberculosis Centre. He writes for Forbes and is teaching an epidemiology 101 course for journalists this summer.

How can humor help in explaining complex scientific or medical topics?

Early in my career, people asked me whether I wanted to go into something "fun" like the arts or something "serious" like medicine. My thought has been, why can't it be both? Why not combine the two and make medical and science issues more accessible and more memorable by making them even more fun, or more pun, for that matter? Even a bad pun (are there such things?) can help you remember a scientific concept or piece of information. As long as you communicate real, evidence-based, scientific information, why not keep things fun for the reader and you? A former university colleague once warned that if I continued to write about things like "vibrating yoga pants," I would not be taken seriously as an academic, as a professor, despite writing about many other things, and would become known as the vibrating yoga pants guy. That wouldn't necessarily be the worst thing in the world.

Bruce Y. Lee is a writer/journalist who is a Forbes senior contributor and has written for The New York Times, The Guardian, STAT, HuffPost, and other media outlets as well. He is also currently a Professor of Health Policy and Management at the City University of New York (CUNY) School of Public Health, where he is Executive Director of CATCH (the Center for Advanced Technology and Communication for Health) and PHICOR. For more info visit bruceylee.com. Follow him at @bruce_y_lee.

How do you decide when a health policy study is worth writing about and how do you approach it?

I want to know if it’s a trend and whether it works. We hear a lot about social determinants of health right now, and I want to know if there’s a study showing that something out there is actually working. I’m less interested in what we already know, such as Medicaid work requirements causing people not to receive care.

My wise old editors back when I was a young reporter at the Des Moines Register and Chicago Tribune would say, “What’s sexy about it?” I want to know, in real practice, is it working? Is some of the value-based care working? Also, really look at the dates [of the data] because, especially in health insurance, they’re talking about real-time data — they know right away when you’re getting your prescription, whether you’re taking your medications, etc. Anything more than a couple of years old is usually worthless. I’m looking for information as fresh as possible.

Bruce Japsen is a Forbes senior contributor who has covered health care for three decades and teaches in the University of Iowa School of Journalism MA in Strategic Communication program. He occasionally appears on Fox Business News and on WBBM News Radio 780 and 105.9 FM. Follow him at @BruceJapsen.

What do you look for in a study to determine if it’s good enough to cover?

I look up the clinical trial in ClinicalTrials.gov and check to see if they changed or added endpoints — and I make sure to report what they were looking for (whether they found it or not), not just the incidental finding that made for a better headline. And more generally, does the outcome make sense for the question they're trying to answer? For example, triggering a blood sugar fluctuation is not the same thing as preventing diabetes or causing weight loss. If they're analyzing a ton of data, I make sure they say in the methods what they're doing to correct for multiple tests — because if they're not, the whole paper could be garbage. I also check out key premises mentioned in the introduction (such as why this topic is worthy of study or some basic fact they're assuming is proven). Sometimes it's a rabbit hole of citations ending with one author citing his own opinion.
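
For registered trials, part of that endpoint check can be scripted. Here is a minimal Python sketch that pulls a trial's registered outcome measures from ClinicalTrials.gov's public v2 API for comparison against the published paper; the NCT number is a placeholder, and the JSON field names reflect the v2 schema as I understand it, so verify them against the current API documentation.

```python
# Sketch: fetch a trial's registered outcomes from the ClinicalTrials.gov
# v2 API to compare against what the paper reports. The field names
# (protocolSection/outcomesModule/...) are assumptions based on the v2
# schema; the NCT ID below is a placeholder.
import requests

def registered_outcomes(nct_id: str) -> dict:
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    record = requests.get(url, timeout=30).json()
    outcomes = record["protocolSection"]["outcomesModule"]
    return {
        "primary": [o["measure"] for o in outcomes.get("primaryOutcomes", [])],
        "secondary": [o["measure"] for o in outcomes.get("secondaryOutcomes", [])],
    }

print(registered_outcomes("NCT01234567"))  # placeholder NCT number
```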

Beth Skwarecki (@BethSkw) is a freelance health and science writer who has covered studies for Lifehacker, Medscape, ScienceNow, and Scientific American.

How do you judge when to report on press releases about developing drugs?

Companies may need to issue news releases (who has a “press” anymore?) to satisfy regulatory requirements to keep investors and the markets informed. Whether we report on the releases or not, they have done what is required. So, as Tara mentioned, we do not have to report on every tidbit, and when we do choose to report on something, we need to keep it in proper perspective as to the source, the stage of development, efficacy vs. adverse effects, any financial relationships that commenters may have, and the context vis-à-vis other similar or competing goings-on in the field. Also, remember that entities pitching stories offer up only happy, live patients. They generally don’t march out the dead ones.

Dan Keller specializes in writing and reporting about basic and clinical medical research and related biotechnology for health care professionals, patients, and the public. He holds a doctorate in immunology and microbiology and has additional research experience in hematology and neuroendocrinology.

What best practices are you following in covering preprints during the pandemic?

I've always sought independent comment on the preprint studies that I've considered covering, but I feel that this pandemic has made me much more vigilant about ensuring the quality of preprints that I report on. These are studies that have not undergone formal vetting by other scientists for the quality of their design and results, so, for starters, asking other researchers to comment on the quality of the paper's methodology can be very informative. Context is very important when mentioning preprints in stories. I try to give readers a sense of what other papers in the same field have found and whether those conclusions are in line with the preprint findings.

During this pandemic, there’s been an even bigger push to get stories out fast, and some coronavirus sources have been more difficult to reach because they have been so busy — I’ve had to work harder as a reporter because of this. But it’s important not to get swept up in a preprint and run with it without vetting it properly. We shouldn’t let journalistic standards become another victim of this pandemic.

Roxanne Khamsi is a science writer based in Montreal, Canada. You can follow her on Twitter at @rkhamsi.

How do you distinguish between scientific papers that have been peer reviewed and those that haven’t to readers, many of whom may not really understand the distinction?

It's important for scientists to be clear on those types of caveats, and for reporters to include them in their stories. Pre-prints are a double-edged sword, though I think on balance one edge is sharper than the other. It can take weeks or months for a journal to publish a paper, especially at a time like this, when journals are flooded with submissions.

I can think of one important paper, on viral shedding in Covid-19 patients, that was on a pre-print server about a month before it was published. Before pre-prints, the data contained in papers still in the pipeline was hidden from view. The world needs access to that data now; publishing it in pre-print form is helping people understand the transmission dynamics and risks of this virus. But there have been bad papers published as pre-prints that have been major distractions. At least one was withdrawn, but its impact remains.

I still think the benefits outweigh the risks, but there are definitely downsides to the wide-scale reporting on pre-prints in this pandemic. Reporters really need to make clear these aren't yet peer-reviewed results.

Helen Branswell is a senior writer covering infectious diseases and global health for STAT. She can be found on Twitter at @Branswell.

As a journalist, how do you communicate changing understandings, predictions and recommendations without undermining your credibility?

This is one of the hardest parts about covering science as a journalist — explaining again and again that it's an iterative, self-correcting process, and that even when everybody does everything right, some findings won't hold up over time (not to mention fraud, p-hacking, file drawer effect, etc.).

I think it can be respectful to readers and can defuse some frustration and confusion if we say more explicitly: what researchers do know today, why that's different from what they thought yesterday, and what they still don't know and hope to find out tomorrow.

Laura Helmuth is the new editor-in-chief at Scientific American after leaving her position as health, science & environment editor at The Washington Post. She previously edited at National Geographic, Slate, Smithsonian and Science and was President of the National Association of Science Writers from 2016 to 2018. Follow her @laurahelmuth.

How are you finding diverse sources and avoiding using the same people again and again for your coronavirus/COVID-19 stories?

Most of my stories are different enough that I don't run into re-using experts. One trick I've used: if I interview a source for a particular story, I'll reach out to them again for a separate story and see if there are other folks they would recommend I talk to. They're always paying it forward, in some way. And I always make it clear that I'd love to speak with women or people of color.

Wudan Yan is an independent journalist in Seattle who has been covering coronavirus for Huffington Post, MIT Tech Review, The New York Times, Science and more. Follow her @wudanyan and see her coronavirus reporting here.

Why is it important to be sure the person you use as a source has the knowledge, experience, training, etc. in the specific topic area you're writing about as opposed to a generalist?

When it comes to ensuring someone has expertise specifically in the area I need, how I assess their knowledge and experience depends on the topic. For example, almost any epidemiologist could comment on some things, such as the basics of study design or general types of bias, that run through the general epidemiology curriculum. But we have our own niches and specialties just like any other field. Asking someone to comment on an area outside their particular niche runs the risk of interviewing someone with only a superficial understanding of the topic.

They might still be able to address it, but they’ll likely lack the depth to be able to put new findings into context or discuss the history of a particular area and how any new information changes the field. They might not know how well a new publication or research finding is accepted by others in the field, or whether it’s controversial and contradicts other published literature. They might not know if the group or person doing the research is reputable or has a history of poor studies or paper retractions.

You’re just opening yourself up to unforced errors if you choose an interviewee without solid knowledge of the niche you’re writing about.

Tara C. Smith is a professor of public health specializing in epidemiology and infectious diseases at Kent State University. She’s a columnist for Self.com and writes freelance articles for a wide range of other news publications. Follow her at @aetiology.

When a fast-moving, high-profile public health story is unfolding, what do you do to ensure the experts you interview are appropriately qualified for the topic? 

In general terms, when I am looking for story sources, I go to PubMed and try as many keywords as I can think of to see what pops up. Then I look for how frequent, how recent, and who the co-authors are — are they names I recognize? I will also look at their faculty pages.

It’s important to take some time to do this, even in a fast-breaking story. Hypothetical example: If the CDC comes out with startling news about a “vectorborne” disease, you had better do enough of a read on your results to separate the mosquito people from the tick people, and the human disease people from the animal people, or you will waste a lot of time emailing.

I also Google to see whether those people have been interviewed, and also, whether they have been interviewed too much — I don’t want to be their 1000th interview, and I don’t want to be copy/pasting what they have said elsewhere. 

As a reporter who specializes in emerging infections and outbreaks, I feel a special responsibility to avoid showcasing inflammatory language — it’s click-attracting but I think it is harmful to our mission of informing the public. (Sorry, traffic gods.) So when I Google to see whether and how much possible sources have spoken, I am also looking for the quality of their expression. On a spectrum of not-descriptive to OMG, I try to pick people who land in the middle.

Maryn McKenna is a freelance journalist who covers public health, global health and food policy. She is @marynmck on Twitter.

How do you find patients to report on how the research affects people?

Find real people to illustrate the real-life impact of cancer research. You can try patient groups, but most no longer comment on drug prices, perhaps because they now get so much of their funding from the pharmaceutical industry. Doctors and hospitals also receive industry support and may not comment on drug prices, either. Check out this helpful (but small) list of patient groups that don't take industry funding.

Check out patient and consumer forums on Twitter, Facebook and other social media. Establish a presence in these communities a few weeks or months before you begin asking questions, so you can understand how they work. If you can't find a patient forum that fits your needs, create your own.

We created a Facebook discussion group after our story debuted, and comments helped to fuel additional stories. While it's important to have data for a story like this, it's also helpful to tell your story through main characters who can illustrate the policies and the emotional and physical side effects of the cancers you're describing.

A groundbreaking series by reporter Liz Szabo of Kaiser Health News found that many cancer drugs, most in fact, are abject failures. They are overhyped, overmarketed and fraudulently advertised. They cost so much that patients often quit taking their medicines, forgoing treatment until death.

What's a simple rule of thumb for deciding whether to cover the introduction of a legislative bill related to health/medicine or medical research?

Here’s a good science analogy. Covering a bill's introduction is like covering results of a Phase I or II trial.

It's sometimes worthwhile depending on the audience and product, etc., but you need to be careful how you write the story so as not to paint the picture that a law is close at hand when it’s more likely to fail.

Sarah Karlin-Smith is a health care reporter at Politico, specializing in the policy and politics that affect the pharmaceutical industry and patients needing medicine.

What are some tips that make coverage of a medical conference easier?

1) Read abstracts, and make sure they’re the ones relevant to your coverage. Having general questions can help, but in my experience, researchers (especially doctors) REALLY appreciate it when they talk to journalists who actually know their stuff, ask informed questions and don’t have to get them to “dumb down” what they say.

2) Go to poster sessions. These are great if you need a few “on the street” comments, and the researchers are usually eager to talk about their work.

3) In oral sessions, sit as close to the podium as possible. This way you can rush the stage before anyone else.

4) Schedule sit-down meetings physically close to each other and to where you and/or the interviewee needs to be. This goes a long way to minimizing travel time. If possible, schedule meetings about 10-15 minutes apart. That way, you have a little bit of wiggle room, and that’s enough time to walk fast at most convention centers. For example, it’s about a 10-15 minute walk between the press room and the exhibit hall at ASCO, held at McCormick Place in Chicago, North America’s largest convention center.

5) Wear comfortable shoes.

Alaric DeArment is a senior reporter covering biopharma at MedCity News and has covered the industry and health care for more than 10 years. His Twitter handle is @biotechvisigoth.

What is the risk of using words such as 'may' or 'might' in a headline about a medical study?

The fundamental problem with using words such as "may" or "might" in a headline is that it conveys very little actual information for readers and has the potential to mislead them. For example, if a headline says that "Therapy X May Be The Solution To Health Problem Z," it could just as easily say "Therapy X May Not Be The Solution To Health Problem Z." In other words, it's not giving readers much actual information. (Can you imagine a headline that reads "Therapy X May Or May Not Be The Solution To Health Problem Z"? I can't either.) What's more, readers could easily read more into the language than you (or your copy editor) intended – translating "Therapy X May Be The Solution" into "Therapy X Is The Solution." This is particularly true for readers who are experiencing the relevant medical problem or have loved ones who are. 

So, what's a headline writer to do? Try to be as specific as possible, based on the details of the relevant research. For example, "Study Finds Therapy X Reduced Symptoms For Some Health Problem Z Patients." Your mileage may vary, depending on the details of the study (and the amount of headline space you have to work with), but it's certainly worth thinking carefully before incorporating "might," "may" or "could" into your headlines.

Matt Shipman is the Research Communications Lead at North Carolina State University and author of the Handbook For Science Public Information Officers. He is a former contributor to HealthNewsReview.org.

What is one of the most valuable metrics in medical research that health journalists should pay attention to?

Medical studies frequently use relative risk to express the difference in outcome rates between an intervention and control group. A 5 percent event rate in the control group compared to a 4 percent event rate in the intervention group would lead to a relative risk reduction of 20 percent — pretty impressive! But the absolute risk reduction in this case (1 percent) may be more informative. The biggest secret in medicine, in my opinion, is that for most interventions, these absolute risk reductions are quite small. That 1 percent absolute risk reduction above? That means you'd need to treat 100 people to avoid one bad outcome. Or, put another way, you'll treat 99 people unnecessarily. The catch is that it is really hard to figure out who the one special patient will be — so we end up treating everyone. The number needed to treat (NNT) brings this all into perspective and allows us to make informed decisions: Am I willing to take on the risks of a new medication (be it in terms of dollar costs or side-effects) for a small chance of a large benefit? That's a key discussion for each patient to have with his or her doctor.
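
A minimal sketch of that arithmetic in Python, using the hypothetical 5 percent vs. 4 percent event rates above:

```python
# Hypothetical event rates from the example above.
control_rate = 0.05      # 5% event rate in the control group
treatment_rate = 0.04    # 4% event rate in the intervention group

arr = control_rate - treatment_rate  # absolute risk reduction: 1 percentage point
rrr = arr / control_rate             # relative risk reduction: "20 percent!"
nnt = 1 / arr                        # number needed to treat to avoid one event

print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
# ARR = 1.0%, RRR = 20%, NNT = 100
```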

F. Perry Wilson, M.D., M.S.C.E., is an assistant professor of medicine in the Section of Nephrology at Yale University School of Medicine’s Program of Applied Translational Research. Check out his YouTube channel on reporting medical research and follow him on Twitter at @methodsmanmd.

What is most important for journalists to keep in mind when covering a nutrition health story?

The most important thing for journalists to remember when covering a health story is that their coverage also influences broader societal scientific literacy.

If journalists cover a medical study that involves in-vitro models, animal models or surrogate endpoints, and consequently has no clear clinical relevance or outcomes, then unless the coverage is done exceedingly cautiously and with real nuance, they provide oxygen to the fire of medical quackery, whose lifeblood is research that suggests a remote possibility of benefit but is sold as life-changing.

Yoni Freedhoff, M.D., (@YoniFreedhoff) is an assistant professor of family medicine at the University of Ottawa and the founder and medical director of the Bariatric Medical Institute.

What's the most important thing for journalists to look for when covering medical studies related to dementia?

There are reams of studies purporting to link dementia risk with myriad factors, like eating (or not eating) certain foods or even how many children a woman has. As we all should know, association is not causation. Just because two events both occur in a given time frame doesn’t mean they’re necessarily connected. The literature is filled with poorly controlled clinical trials, questionable data and results taken out of context.

It’s very difficult to evaluate dementia patients accurately because there’s no simple, inexpensive assessment, like a blood test, available yet. PET scans can pinpoint buildup of plaque in the brain, but they’re expensive and usually not covered by insurance. Scientists are working on pinpointing genetic biomarkers, similar to how some cancer patients are screened, but aren’t there yet.

As journalists, we have a responsibility to keep asking hard questions about studies that link external factors with this disease. Make sure the evidence is solid: that the study was large enough for its results to be statistically meaningful, and that subjects were appropriately screened, e.g., with brain scans, which provide a quantifiable measure of change. Cognitive screening tests are much more subjective and, while important, can’t offer the same type of hard data. Journalists not only need to understand the different evaluations available for those with cognitive impairment but also to understand that dementia is not one disease and manifests differently in every individual.

Liz Seegert (@lseegert) is AHCJ's topic leader on aging and an independent journalist whose reporting and writing background spans more than 25 years in print, broadcast and digital media. 

What is the most important thing that health journalists can do to improve their reporting of medical research?

I would probably suggest they stop reporting association studies for nutrients and foods. Have a moratorium. They can ask for new approaches and about what is being done differently compared to what has been done all along. If we give the field some breathing room, we may see some new approaches. The truth may be intangible, but at least we will not be misled.

I think we need to take a step back and not assume that we have managed to measure everything. Respect the complexity and try to dissect that complexity and see if it is dissectible. We should probably avoid making recommendations and telling people to eat this and that and not eat something else; it’s just premature.

John Ioannidis, M.D., D.Sc. is a professor of medicine and health research and policy at Stanford University School of Medicine in California.

What should journalists particularly pay attention to or ask about when covering a medical research study related to nutritional supplements (vitamins, minerals and other supplements)?

First and foremost, look at funding sources for any research on dietary supplements (or food additives, or even foods or food groups, for that matter). Look out for funding from supplement companies, industry groups and nonprofits who may be biased towards one outcome or the other. These conflicts of interest don't mean the study is worthless, but they should heighten your skepticism when evaluating the study, and you should ask study authors and your outside experts about them.

Beyond that, pay close attention to the supplement dose used and put it in context for your readers. If the supplement is a vitamin or mineral, compare the dose to the Dietary Reference Intake (DRI) values. How does the dose compare to what you might find in a balanced diet? If it's way above that, it's a pharmacological dose, and that's worth highlighting for your readers. The NIH Office of Dietary Supplements is an amazing resource!

Alice Callahan (@scienceofmom) has a doctorate in nutrition, helpful background for her work as a freelance health reporter and as the author of "The Science of Mom: A Research-Based Guide to Your Baby's First Year."

As a physician source, what do you find to be most helpful as a journalist interviews you?

I am happiest when a young reporter is honest with me and says, "I don’t really understand this issue." We both have the same goal of getting good information out, so let me know how I can help you.

I try to provide written material (one page or less) on the topic; websites with helpful information that patients and families can use, which a reporter can link to in the story; and, when appropriate, families that can speak to the issue.

Elizabeth Murray, D.O., M.B.A., is board-certified in pediatrics and pediatric emergency medicine. She works with the media regularly and is part of People Magazine’s Health Squad. Twitter: @DocEMurray

What's the first or number one way you look for the possibility that a study involves p-hacking?

I search in the page for the word "multiple" to see what they say about how they adjusted for multiple comparisons. I hope to see a thoughtful explanation of how they adjusted and why. If they say there's a good reason why they didn't adjust, I'll ask a statistician or other outside source about it. But often there is no explanation or sometimes no mention at all – even after I read through to see if they discussed it in other terms – and that's a major red flag.

Beth Skwarecki is the health editor at Lifehacker. She lives in Pittsburgh, Pa. Follow her on Twitter at @bethskw.

What do you do when you come across an animal study?

If it's a study on a cancer drug in mice, skip it. Every other day, mice are cured of cancer in a lab. I do think there are certain times when animal studies are really important, though, such as monkey studies testing novel therapies or those evaluating drugs for diseases that currently have no treatment.

Also, with some diseases, like Ebola, it's nearly impossible and highly unethical to do challenge studies in humans. I also think certain animals might be better for studying certain diseases. For example, I am learning that dogs and humans have some of the same mutations that give rise to cancer, making dogs a much better animal model.

Emily Mullin (@emilylmullin) is associate editor for biomedicine at MIT Technology Review.

What should journalists consider regarding the language they use in covering medical studies to be conscientious about individuals included in the study?

Choose the words you use carefully. This seems like basic advice, but it’s key when covering medical studies. It’s important for journalists to avoid simply repeating the terms scientists use. For example, scientists may refer to study “subjects,” to “patients,” or to certain conditions as “diseases” without thinking too much about how dehumanizing that is or whether it robs people of agency. It’s important that journalists choose more respectful and inclusive alternatives, evolving their language as societal definitions change.

At Spectrum, we write about autism, and we struggle all the time with these questions. Following the lead of some advocacy groups, we recently made the decision to call autism a “condition” instead of a “disorder,” for example, and we have always referred to “participants” or “people.” Some people still say our language is too medicalized, but we try to be aware of our choices and revisit our style guide often. The Associated Press offers other guidelines that might be useful.

Apoorva Mandavilli (@apoorva_nyc) is founding editor and editor-in-chief of Spectrum. She is also adjunct professor of journalism at New York University. You can read her writing here.

How and/or when do you decide that a medical study you were planning to cover actually shouldn’t be covered?

Usually I look at a few things: Some abstracts don’t list the total number of patients in a study, and then you look at the full text and there are only six participants. There was one study that garnered a ton of attention in HIV circles that was based on the experiences of just two or three participants. I chuck those. And then I also look at who funded the study. Usually at the end, there will be a section on disclosures. If a pharmaceutical company funded a glowing study, it doesn’t mean I won’t cover it, but I think about what it adds to the conversation and make sure to mention the funding. In some cases, I’ve avoided summarizing yet another positive study funded by the same pharma company over and over again. I need to ask if this is adding anything to the conversation.

Heather Boerner is a freelance journalist and editor and author of Positively Negative: Love, Pregnancy, and Science's Surprising Victory Over HIV. Her work has appeared in The Washington Post, The Daily Beast, Al Jazeera America and the Atlantic, among others. Follow her @HeatherBoerner.

What kind of biostatistical pitfalls should reporters watch out for when reporting on medical research?

Don't conflate odds ratios (OR) and hazard ratios/relative risk (HR/RR). Use real numbers. Explain that 2 out of every 10,000 people will have a condition or adverse event instead of saying that the condition or event is X% more likely. If something is 50% more likely to happen with Thing Y, but it only happens in 4 out of 100,000 people when Thing Y isn’t involved, then the increased risk actually translates to just 2 additional people out of 100,000 (a total of 6 people out of 100,000). Watch out for p-hacking as well. If you see what looks like a bunch of statistical fishing expeditions in the paper and want someone to tell you if that's what you're seeing, ask a stats person you trust to read over the analysis or double-check what you found.
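
To illustrate that translation, a small Python sketch using the hypothetical Thing Y numbers above:

```python
# Hypothetical numbers from the example above: a baseline of 4 cases per
# 100,000 people, and a "50% more likely" relative increase with Thing Y.
baseline_per_100k = 4
relative_increase = 0.50

with_thing_y = baseline_per_100k * (1 + relative_increase)  # 6 per 100,000
extra_cases = with_thing_y - baseline_per_100k              # 2 per 100,000

print(f"{with_thing_y:.0f} per 100,000 with Thing Y, "
      f"i.e., {extra_cases:.0f} additional cases per 100,000")
```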

Emily Willingham, Ph.D., is a science journalist whose work has appeared in The New York Times, Slate, Discover, and other publications. She is a Forbes contributor and co-author of the book The Informed Parent: A Science-Based Resource for Your Child's First Four Years. She can be followed at @ejwillingham.

What is the most important ethical guideline to keep in mind when covering medical research?

All journalists should be accurate and truthful, but I think it’s especially important that health journalists internalize the idea of minimizing harm. We are tasked with reporting the most personal details of people’s lives, and on topics that can immediately impact their wellbeing. Before publishing or broadcasting a health story, journalists need to think beyond its immediate impact and consider how people will be affected for years to come through online archives. Journalists should be especially cautious about how those featured will be impacted by the story, including individuals, families and groups. Health stories have the ability to empower people, but they also have the power to stigmatize. The bottom line is to think of harm holistically.

Andrew M. Seaman (@andrewmseaman) is the senior medical journalist with Reuters Health in New York City. He is also the chair of the ethics committee for the Society of Professional Journalists, which revised its decades-old Code of Ethics in 2014. AHCJ embraces the SPJ Code of Ethics in its statement of principles.

What two critical details do you consider in looking at PR-hyped animal studies?

My two key points would be:

Sample size is key. If the study shows 90 percent efficacy but only had a sample size of four animals, the press release will call it revolutionary and groundbreaking... but a journalist shouldn't.

Also, what is the study animal? Is the study on mice? Mice aren't human, and many disease models aren't natural diseases of mice or don't behave similarly in other species.

Elizabeth Devitt is a freelance science journalist in her second career after being a veterinarian. She writes about the environment, animals, medicine and everything that connects animals to humans. Her work has appeared in National Geographic News, ScienceNOW, Nature Medicine, Cancer Discovery, San Jose Mercury News and the Bay Area Monitor, among others. Check out her website or follow her on Twitter at @elizdevitt.

How do you determine whether to cover a study or not?

Choose whether or not to cover a study not only based on its findings, but on where it was published, who funded it and the general quality of its methods. Read the actual study. Simplify for audiences when necessary, such as by explaining mechanisms with analogies, but please do not simplify the conclusions to raise newsworthiness. Take pride in reporting small findings within the context that science continually evolves … a medical story can be meaningful and interesting without needing to be groundbreaking. Be humble.

Hanna Saltzman (@hannasaltzman) is a health journalist, researcher and organizer. She currently works as a research analyst at the University of Utah School of Medicine and is writing a book that aims to bring basic physiology concepts to a mainstream public.

How do you find an email address for a researcher who does not have it posted?

Sometimes I want to speak with a specific researcher, but they do not have their email address posted on their institution’s bio page (sometimes because they are also a practicing clinician who may not want to encourage patient emails), and I may not have time to track down the PIO if it’s not someone I know. I’ve found the best way to find these emails is PubMed. Most of these researchers have been a corresponding author on at least one paper, and a search of their name lets me check the author list to see if they are the corresponding author on any papers they’ve authored. So far, this method has never failed me.
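
As a side note, the PubMed search itself can be partly scripted. Below is a minimal sketch using NCBI's public E-utilities endpoints to pull a researcher's PubMed records and scan them for email addresses, which often appear in corresponding-author affiliation strings; the author name is a hypothetical placeholder, and the email regex is a rough heuristic.

```python
# Sketch: search PubMed via NCBI E-utilities for an author's papers, then
# scan the fetched XML records for email addresses, which frequently show
# up in corresponding-author affiliation strings.
import re
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def find_author_emails(author: str, max_papers: int = 20) -> set:
    # Look up PubMed IDs for the author's papers.
    search = requests.get(f"{EUTILS}/esearch.fcgi", params={
        "db": "pubmed", "term": f"{author}[Author]",
        "retmax": max_papers, "retmode": "json",
    }, timeout=30).json()
    ids = search["esearchresult"]["idlist"]
    if not ids:
        return set()
    # Fetch the full records as XML and scan them for email addresses.
    xml = requests.get(f"{EUTILS}/efetch.fcgi", params={
        "db": "pubmed", "id": ",".join(ids), "retmode": "xml",
    }, timeout=30).text
    return set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", xml))

print(find_author_emails("Doe JA"))  # hypothetical author name
```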

Tara Haelle (@TaraHaelle) is AHCJ's medical studies core topic leader. She is a freelance journalist and multimedia photographer who has particularly focused on medical studies over the past five years. She specializes in reporting on vaccines, pediatrics, maternal health, obesity, nutrition and mental health.

What makes a good anecdote in a health story?

If readers see themselves, or someone they love, in the person’s story, that’s a good anecdote. Reporters need to look for characters, not just quotes. A good anecdote dramatizes a situation rather than simply describing it, but it also illustrates the larger story while conforming to — not contradicting — the evidence. Inappropriate anecdotes are those that are not part of any trend and that are unsupported by, or outright contradict, the evidence base. Jenny McCarthy’s use of her son Evan to suggest that vaccines cause autism is a poster child for using an anecdote irresponsibly because it goes against the evidence.

Liz Szabo has covered medical news for USA Today since 2004. Her work has won awards from the Campaign for Public Health Foundation, the American Urological Association and the American College of Emergency Physicians. Szabo worked for the Virginian-Pilot for seven years, covering medicine, religion and local news.

What kinds of misunderstandings can contribute to distrust between journalists and researchers?

Research is cautious and leaves room for new information or even for being wrong; it moves along incrementally with small advances. The media tends to want big definitive statements and jaw-dropping breakthroughs. That gap is difficult to bridge and leads to an avalanche of misreported findings, which then makes researchers loath to talk to journalists. If a journalist can put the findings in context without exaggeration and make connections to everyday life or human culture, it's more interesting AND accurate.

Molly Gregas’ broad interests in science and communication stem from growing up in a family of writers, teachers and academics. She earned her PhD in biomedical engineering from Duke University and spent several years in research before immersing herself in a variety of science-related communication, education and outreach initiatives. She works as a writer, editor and research communication specialist and is based in Toronto, Ontario.

What part of a study may be overlooked — but shouldn’t be — by journalists?

Never ignore the section on the limitations of the study. Always read the whole study. There's often a lot of interesting information packed into the methods section, etc.

Elizabeth DeVita-Raeburn writes primarily about medicine, science and psychology. Her new book, The Death of Cancer, will be published by FSG in November 2015. Follow her on Twitter at @devitaraeburn.

What’s your advice for a brand-new reporter covering medical studies that veterans may take for granted?

This might go without saying for most of us, but I think it is worth repeating for people new to the beat: Get your hands on the actual study and call up the researcher; don't just read the press release. Press releases sometimes exaggerate or suggest news hooks that don't really represent the research. Also, balance your story by interviewing an expert who wasn't involved in the study.

Tracy Miller (@MillerTracyL) has reported on health and medicine as a senior digital editor for Prevention magazine and the New York Daily News.

What is the most important point for reporters to convey in covering observational/epidemiological studies?

Correlation is not causation. Repeat. Keep repeating. I see too many reports saying that two things are correlated and, therefore, one causes the other. This is usually not the case.

Amy Vidrine has been a research scientist for over 10 years, in microbiology, molecular biology, and biochemistry. She has recently started writing fiction and can be found on Twitter.

What should reporters keep in mind when reporting on the findings of just one new study?

Single findings should be viewed in the context of the bigger picture of all other findings on the subject. Findings frequently contradict each other. Differences may be because of study methods or sample size/demographics, or because of flaws in either study.

I always try to find recent review articles that can accurately describe that bigger picture. If it's confusing or radical, ask the researcher or another expert — they can also provide lay-worded context and scale to the finding.

Olivia Campbell (@liviecampbell) is a freelance journalist whose writings on medicine and mothering have appeared in Pacific Standard Magazine, Brain, Child Magazine and The Daily Beast.

What are the journalistic red flags with epidemiology statistics?

Journalists should be very careful with epidemiology statistics – in particular, prevalence.

To use one very controversial example, the prevalence of autism spectrum disorders has increased from 1 in 150 children a decade ago to 1 in 88 now, according to the CDC. That statistic doesn't tell us whether the condition is more common than it was a decade ago, only that it is more frequently diagnosed. (Which may be the result of better screening and an expanding definition of ASD, not higher incidence.)

Alex Wayne (@aawayne) writes about health care policy for Bloomberg News.

What antennae go up when you see a press release about a study with remarkable findings?

Never trust press releases about studies. If they’re to be believed, we’ve cured cancer, Alzheimer’s and the common cold. Get the study and read it for yourself.

Markian Hawryluk is a health reporter with The Bend (Ore.) Bulletin. He spent 15 years as a health policy reporter in Washington, D.C., writing for trade publications. He has won multiple awards for his health reporting, including the Bruce Baer Award, Oregon’s top prize for investigative journalism. Last year, he was a Knight-Wallace Fellow at the University of Michigan and is a member of AHCJ’s 2013-14 class of Regional Health Journalism Fellows. He recently reported on a local clinic that decided to kick out the drug reps – and how it changed their practice.

How do you get researchers to open up during an interview?

When covering a study, I try to avoid asking researchers to sum up their findings for me or asking what they think is most important about the research right off the bat. Rather, I ask about specific numbers or bring up something I thought was interesting in the findings. Many researchers, as sad as it is, perk up when they realize you actually read their study and were interested in it, and aren’t just calling them based on a press release headline.

It can also be helpful to ask a source, "What is most important for patients, or their family members, to know about this?" That can get researchers out of medical jargon speak, if you're writing for a consumer audience like I usually do.

I always like to end an interview with the question, "Is there anything else you would like to highlight?" Some sources won't have anything to say, but others will rephrase an earlier point in a helpful way, or bring up a research or policy implication I hadn't thought about. Either way, they often appreciate being asked!

Genevra Pittman is a medical journalist for Reuters Health in New York. She is a graduate of Swarthmore College and New York University’s Science, Health and Environmental Reporting Program. When not writing and reading about health and medicine, she runs, roots loudly for Boston sports teams and plays fetch with her cats.

What advice do you give your staff about finding and reporting absolute risk in medical studies? 

Note: Absolute risk is a person’s risk of developing a disease over a given time period.  It’s important to report absolute risk alongside a study’s reported relative risks to keep the benefit of a test or treatment from being exaggerated in readers’ minds.

• In the best-case scenario, a study spells out what the absolute risks of a given condition were in, for example, the treatment and control groups. That just requires noting it in the story.

• In a close second-best scenario, the text doesn’t note the absolute rates, but a table or figure – usually Table 2 – notes the percentages of each group, or the absolute numbers, that had the condition in question. Pick a few representative numbers and include them.

• Sometimes it is more difficult to tell. For example, when a cohort is stratified into quartiles based on a risk factor, and each group has a higher percentage of a given condition than the next, it may be necessary to give the overall number of subjects with a given condition. That’s not ideal, of course, but it at least gives some context for “25 percent greater,” etc.

• Finally, sometimes the report does a particularly bad job of highlighting absolute risk and does none of the above things. That probably suggests some weaknesses, but in this case, we can note that the study did not include an absolute risk and go to a source like MedlinePlus to find a general population figure for the condition. Again, that at least gives some context.

Ivan Oransky, M.D., is executive editor of Reuters Health and AHCJ’s treasurer. He blogs at Embargo Watch and Retraction Watch. He previously was managing editor for online at Scientific American and deputy editor of The Scientist. He also wrote one of AHCJ’s most popular blog posts, about reporting risk: “Tanning beds: What do the numbers really mean?”

— excerpted from AHCJ's slim guide by Gary Schwitzer, “Covering Medical Research”