
Articles about medical studies

Clinical guidelines

Dose response


Following funding

Guides to reporting

Hazard ratios

Interpreting graphs

Media coverage

Peer review

Screening and diagnostic tests

Sexual abuse, assault and harassment

Understanding bias

Understanding statistics

Websites on statistics, studies and media coverage

Articles about medical studies

Clinical Misinformation: The Case of Benadryl Causing Dementia: Both readers and journalists may sometimes forget to consider whether the findings of a study with a very specific population can be applied to other populations. In other words, how generalizable are the findings? This blog post from the NYU Langone Journal of Medicine, Clinical Correlations, provides an excellent case study on the dangers of extrapolating information from one study with a very narrow demographic patient population to others who do not share that population’s characteristics. 

This brief description of the nocebo effect at Smithsonian Magazine links to a lot of the research on the phenomenon. 

If you’re looking for a good overview of what scientific research looks like — the good, the bad and the ugly — “Science Isn’t Broken” by Christie Aschwanden is an excellent primer digging into P values and statistical significance, ways that bias affects research studies, the role of peer review, why papers are retracted and other key features of the scientific research ecosystem. The article is aimed at science research generally, but every bit of it applies to medical research. An interactive multimedia feature on P values and “p-hacking” helps make an abstract concept more immediately accessible as well.

Do Clinical Trials Work?
An op-ed by Clifton Leaf, author of The Truth in Small Doses: Why We're Losing the War on Cancer – and How to Win It, published July 13, 2013, in The New York Times.

Bias and Spin in Reporting of Breast Cancer Trials
By Zosia Chustecka. Jan. 15, 2013 in Medscape Medical News

Survival of the Wrongest
An article about how personal-health journalism ignores the fundamental pitfalls baked into all scientific research and serves up a daily diet of unreliable information by David H. Freedman and published Jan. 2, 2013, in the Columbia Journalism Review.

“Lies, Damned Lies, and Medical Science”
A story about the work of Dr. John Ioannidis, from The Atlantic, November 2010

“The Perils of Bite-Sized Science”
Professors from the universities of Liverpool and Bristol discuss the recent trend toward shorter medical research papers.

Clinical guidelines

National Guideline Clearinghouse
NGC was created by the Agency for Healthcare Research and Quality (AHRQ) in partnership with the American Medical Association and the American Association of Health Plans (now America's Health Insurance Plans [AHIP]). Its mission is to provide health professionals, health care providers, health plans, integrated delivery systems, purchasers, and others an accessible mechanism for obtaining objective, detailed information on clinical practice guidelines and to further their dissemination, implementation, and use.

Dose response

In “Toxicology: The learning curve,” journalist Dan Fagin examines the hypothesis some researchers hold about unexpected dose-response effects, such as a dramatic effect from a small dose.

For a detailed explanation of different types of dose response curves, particularly of endocrine disruptors, this 2012 overview from Endocrine Reviews provides an in-depth look: “Hormones and Endocrine-Disrupting Chemicals: Low-Dose Effects and Nonmonotonic Dose Responses.”


You can take a web-based, self-paced course on epidemiology offered by the Centers for Disease Control and Prevention. It does not involve instructors, formal evaluation or CME, but the self-study course includes comprehension checks along the way. Start with Lesson 1 here.

Following funding of medical research

Looking for how much a particular procedure, test or service should cost? The Healthcare Bluebook offers consumers an opportunity to search for the “fair price” of any healthcare service based on their ZIP code. (If you don’t input a ZIP code, it provides the national average.) The people behind the site describe themselves as “a diverse team of clinicians, healthcare experts, strategists and technologists dedicated to transforming healthcare with transparency.” The site is helpful when you want to provide an approximate cost of a procedure, test or other service in a story.

All vaccines that the CDC recommends are included in the Vaccines for Children program, and these prices are listed on the CDC website. The price list includes a separate chart for vaccines recommended for adults and for flu vaccines. More important for reporters, the charts also list private-sector vaccine prices.

Tools help reporters follow tax dollars that fund medical research

Grants awarded by the federal Department of Health and Human Services can be searched using the TAGGS (Tracking Accountability in Government Grants System) tool. You can search by topic or by state, institution and the name of the investigator.

The National Institutes of Health has a grant-searching tool called RePORTER, part of its Research Portfolio Online Reporting Tools (RePORT). In addition to keyword search, you can search by funding category, location and the names of investigators.

Guides to reporting

Tips for interviewing people with disabilities: Covering medical studies often means interviewing people who live with conditions discussed in a study. If you’ll be meeting in person with someone who has a disability, the interview will go more smoothly and productively if you both feel comfortable. This tip sheet from the National Center on Disability and Journalism offers tips on what reporters should do or consider before and during the interview. Although the tips focus on in-person interviews, many of the suggestions could apply to phone interviews as well.

General Google searches are not the best way to find good research. That is one of many useful reminders from Denise-Marie Ordway’s tip sheet, 10 things we wish we’d known earlier about research, on the Journalist’s Resource blog at Harvard University’s Shorenstein Center on Media, Politics and Public Policy. Ordway, a research reporter/editor at the site (and a former Pulitzer finalist), provides a mix of tried-and-true tenets of medical research reporting, plus some important caveats that rookies and veterans alike may on occasion overlook or forget. This tip sheet can be an excellent refresher before embarking on a story that relies heavily on medical research.

The difference between science journalism and science communication may seem so subtle as not to be important at first blush, at least to a lay reader. The difference between the two is crucial, however: one requires a journalist to report all facts and relevant perspectives on an issue without bias toward the actors involved. The other — science communication — is aimed at communicating science and possibly even science advocacy, often without concern about the people behind the science. In an excellent essay at the Guardian, science journalist Brooke Borel explains the difference using a recent example of conflicts of interest in GMO research. A similar essay by Bianca Nogrady explores the same issue.

“False balance” when covering controversial medical studies: This Columbia Journalism Review article, Sticking with the truth: How ‘balanced’ coverage helped sustain the bogus claim that childhood vaccines can cause autism, is a case study in how misunderstanding what “objective” coverage really entails can contribute to public misinformation and misconceptions when the evidence for one side of an issue is overwhelmingly greater than the contradictory evidence. By always seeking “both sides” and giving them equivalent weight, articles can create a misleading “false balance.”

Uncertainty is a way of life for scientists, but readers and even journalists are usually less comfortable with it. Sense about Science provides a guide on Making Sense of Uncertainty, which covers how scientists express their degree of confidence about results, how uncertainty can undermine valid evidence and how policymakers and stakeholders make decisions in spite of uncertainty.

NIH Clinical Research Trials and You: Glossary of Common Terms
This guide is not specific to news coverage, but it serves as a handy reference for reporters who might need a refresher in clinical research jargon, or those who are learning it for the first time.

“Covering Medical Research: A Guide for Reporting on Studies,” by Gary Schwitzer. Available online as a slim guide from AHCJ.

Questions to Guide Reporting from the Dartmouth Institute for Health Policy and Clinical Practice

Hazard ratios

Though highly technical and not for the layperson, this journal article, Hazard Ratio in Clinical Trials, offers a deep dive into how hazard ratios are frequently misunderstood or misused, what they really mean, and what clinicians need to understand about them. This link is most helpful to those with a research or statistical background or for a reporter writing about medical studies for trade publications aimed at clinicians.  
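A recurring point in that literature is that a hazard ratio is not a risk ratio. As a hedged illustration (all numbers hypothetical, assuming constant hazards for simplicity), a short Python sketch:

```python
import math

# Hypothetical sketch, assuming constant hazards: a hazard ratio (HR) of 0.5
# halves the instantaneous event rate, but the ratio of cumulative risks
# drifts toward 1 as follow-up lengthens, so HR is not a simple risk ratio.
h_control = 0.10          # assumed events per person-year in the control group
hr = 0.5                  # hypothetical hazard ratio for the treated group
h_treated = hr * h_control

for years in (1, 5, 10):
    risk_control = 1 - math.exp(-h_control * years)   # cumulative risk by year t
    risk_treated = 1 - math.exp(-h_treated * years)
    print(f"year {years}: cumulative risk ratio = {risk_treated / risk_control:.2f}")
```

With these assumed rates, the printed risk ratio rises from about 0.51 at one year to about 0.62 at ten years, even though the hazard ratio stays fixed at 0.5.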

Interpreting graphs

"How to Interpret Figures in Reports of Clinical Trials"
Looking at a graph you don't understand? This article from the BMJ explains the four most common types of graphs in medical studies: flow diagrams, Kaplan-Meier plots, forest plots, and repeated measure plots. The full text of BMJ articles is available for free to reporters who register for media access.

Media coverage

Tracking deaths in police custody: For an excellent article on the problems with tracking police-caused deaths each year, this FiveThirtyEight piece, “Nobody Knows How Many Americans The Police Kill Each Year,” goes into detail.

An Intriguing Link Between Police Shootings and Black Voter Registration

Sticking with the truth: How ‘balanced’ coverage helped sustain the bogus claim that childhood vaccines can cause autism – This article is essentially a case study revealing how journalism contributed to public misinformation and misconceptions about vaccines because of too much reliance on “objectivity.” By always seeking “both sides,” too many articles fell prey to false balance, a pitfall that can occur with other health issues when the evidence for one side is overwhelmingly greater than the contradictory evidence.

Peer review

Although written specifically for early career researchers, the Sense About Science guide “Peer review: the nuts and bolts” (pdf here) also gives an in-depth walk-through of peer review for journalists. Seeing what advice is given to early career researchers about the process can also provide reporters with insights about the goals and processes of peer review.

With a primer on the peer review process, including a short overview guide to peer review, this page also provides a number of other resources from Sense About Science to help journalists wrap their heads around peer review, the issues associated with the process and how it relates to open-access publishing.

This Nature profile of Jeffrey Beall, an academic librarian and researcher at the University of Colorado in Denver, provides a good introduction to predatory journals and Beall’s list, one resource — albeit not unbiased — of potentially predatory journals.


Screening and diagnostic tests, from Medscape: If you want to dig more into the differences between screening and diagnostic tests, this overview on Medscape reviews not only those differences but also the difference between specificity and sensitivity and how to understand positive and negative predictive values.

Making sense of screening: Understanding the difference between screening and diagnostic tests and the risks and benefits of screening tests is essential to reporting on them accurately. This page contains a link to a complete guide on screening tests from Sense About Science as well as the basics nicely presented in this slideshow (PDF).

Sexual abuse, assault and harassment

This comprehensive list from Poynter is divided into Research and Records, Writing About Sexual Abuse of Children, Advocacy Organizations, Comprehending Pedophilia, Resources from the Specialized Reporting Institute, Sex Offender Registries, A Perpetrator's Viewpoint and Books, along with a list of experts, Twitter tags, blogs and reporters’ articles.

Those who will be spending a lot of time reporting on sexual abuse of children may want to take the time to go through this 80-minute online seminar, Covering Child Sex Abuse: Lessons from the Sandusky Story, from Poynter. The course is taught by reporter Sara Ganim, who broke the first story on the sex abuse scandal involving Jerry Sandusky, the former Penn State University assistant football coach.

Understanding bias

For a description of lead-time bias with a helpful diagram, check out this link from the National Library of Medicine.

Some Effects of "Social Desirability" in Survey Studies

Social Desirability Bias and the Validity of Indirect Questioning

Recall Bias can be a Threat to Retrospective and Prospective Research Designs
Recall bias represents a major threat to the internal validity of studies using self-reported data. It arises from the tendency of subjects to report past events in a manner that differs between the two study groups. This pattern of recall errors can lead to differential misclassification of the related variable among study subjects, with a subsequent distortion of the measure of association in any direction from the null, depending on the magnitude and direction of the bias. Although recall bias has largely been viewed as a common concern in case-control studies, it also has been documented as an issue in some prospective cohort and randomized controlled trial designs.

How Do You Know Which Health Care Effectiveness Research You Can Trust? A Guide to Study Design for the Perplexed: An article in Preventing Chronic Disease, by Stephen B. Soumerai et al. on June 25, 2015, reviews different types of common bias in studies based on study design, such as healthy user bias, history bias and social desirability bias.

Varieties of bias to guard against: This PDF gives an extensive overview of 32 different types of bias that can occur in medical research publishing. It is impossible to design a study that contains no bias at all, but there are ways to minimize bias, which this document discusses as well.

Bias in randomized controlled trials: This is a sample chapter from a book that explains the types of bias that can specifically occur in randomized controlled trials. It is a little long, but it’s written in layperson terms with clear subtitles and sections that make it highly readable and accessible.

Understanding statistics

Validity, reliability, and generalizability in qualitative research: To better understand generalizability, as well as how to assess the reliability and validity of study findings, this article briefly discusses five published studies to illustrate how each of these concepts applies.

To better understand p-hacking, this Nature article dives into the possible statistical errors in research.
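One of the errors discussed in that literature, unplanned multiple comparisons, is easy to demonstrate. The sketch below (our illustration, not from the article) exploits the fact that under the null hypothesis p-values are uniformly distributed, so each test's p-value can be simulated with a random draw:

```python
import random

# Minimal simulation of why many unplanned comparisons inflate false positives.
# Under the null hypothesis, p-values are uniform on [0, 1], so random.random()
# stands in for the p-value of a test of a true-null hypothesis.

def familywise_error_rate(n_tests, alpha=0.05, n_sims=20_000, seed=1):
    """Estimate the chance at least one of n_tests null tests comes out 'significant'."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < alpha for _ in range(n_tests))
        for _ in range(n_sims)
    )
    return hits / n_sims

# With 20 looks at the data, a spurious "significant" finding turns up in
# roughly 1 - 0.95**20, or about 64%, of studies even when no effect exists.
print(f"{familywise_error_rate(20):.2f}")
```

The simulated rate should land close to the analytic value of 1 − 0.95²⁰ ≈ 0.64, while a single pre-specified test stays near the nominal 5%.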

While knowing the five basic steps to a systematic review is helpful, this more in-depth article goes into detail on each of the steps.

The eight stages of systematic reviews and meta-analysis (done together) are outlined in detail in this article from the Journal of the Canadian Academy of Child and Adolescent Psychiatry.

Sensitivity and specificity can be challenging to understand, and this article clearly describes the differences between them and how they relate to false positives, false negatives, positive predictive value and negative predictive value. It also walks you through concrete examples.
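The arithmetic behind those four measures is simple enough to sketch. The following Python example (hypothetical counts, not taken from the article) computes them from the cells of a 2x2 table:

```python
# Minimal sketch (hypothetical numbers) of the four screening-test measures,
# computed from a 2x2 table of test results versus true disease status.

def screening_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV for a 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positives among all who have the disease
    specificity = tn / (tn + fp)   # true negatives among all who are disease-free
    ppv = tp / (tp + fp)           # chance a positive result is a true positive
    npv = tn / (tn + fn)           # chance a negative result is a true negative
    return sensitivity, specificity, ppv, npv

# Hypothetical screening test: 90 true positives, 10 false negatives,
# 50 false positives, 850 true negatives.
sens, spec, ppv, npv = screening_metrics(tp=90, fp=50, fn=10, tn=850)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```

Note how a test with 90% sensitivity still yields a PPV of only about 64% here: predictive values depend on how common the condition is, not just on the test itself.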

Forest plot explained: This concise explainer of forest plots offers several helpful visual examples. 

Deciphering a forest plot for a systematic review: This one-page diagram of a forest plot (PDF) identifies the key parts of it and provides a quick reference for journalists.

The Number Needed to Treat calculator provides journalists with a tool for figuring out the NNT even if it’s not reported in a study, as long as the study provides the raw data on outcomes (absolute risk instead of only relative risk).
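The calculation itself is one line of arithmetic: NNT is the reciprocal of the absolute risk reduction. A minimal sketch with hypothetical event rates:

```python
# Minimal sketch of the arithmetic an NNT calculator performs. The number
# needed to treat is the reciprocal of the absolute risk reduction (ARR).
# Event rates below are hypothetical.

def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("No absolute risk reduction; NNT is undefined.")
    return 1 / arr

# E.g., events in 8% of controls vs. 6% of treated patients: ARR = 0.02,
# so about 50 patients must be treated to prevent one additional event.
print(round(nnt(0.08, 0.06)))
```

This is also why the entry stresses absolute risk: a "25% relative risk reduction" (8% down to 6%) sounds large, but the NNT of 50 puts it in concrete terms.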

Sometimes it helps to go through an actual lesson plan with sample problems to understand certain biostatistical concepts. “A beginner’s guide to interpreting odds ratios, confidence intervals and p values” is a 20-minute tutorial on those three topics.

“Explaining Odds Ratios” explains what odds ratios are and what the mathematical formula is for them, including several illustrative examples.
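For the formula itself, a short sketch (hypothetical 2x2 counts; the 95% confidence interval uses the standard normal approximation on the log scale):

```python
import math

# Minimal sketch (hypothetical counts) of the odds-ratio formula from a 2x2
# table: exposed cases (a), exposed controls (b), unexposed cases (c),
# unexposed controls (d). OR = (a*d) / (b*c), with an approximate 95% CI
# computed on the log scale.

def odds_ratio(a, b, c, d):
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical study: 30 exposed cases, 70 exposed controls,
# 10 unexposed cases, 90 unexposed controls.
or_, (lo, hi) = odds_ratio(a=30, b=70, c=10, d=90)
print(f"OR={or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A confidence interval that crosses 1.0 would mean the data are compatible with no association, which is why reporters are urged to look past the point estimate.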

Don't understand the difference between incidence, prevalence, rates, ratios or other measures of disease occurrence? Check out this helpful cheat sheet from the University of Ottawa in Canada.

Statistics Glossary is an easy-to-use cheat sheet to help you remember what important statistical concepts mean, from the Dartmouth Institute for Health Policy and Clinical Practice.

Compendium of Primers is a collection of articles on understanding statistics and statistical methods in medical research. It was originally published by the now-defunct journal Effective Clinical Practice, a publication of the American College of Physicians.

The Cochrane Collaboration has put together this entertaining tutorial about P values and statistics.

"Epidemiology and how confounding statistics can confuse," by Marya Zilberberg, M.D., M.P.H.

“News and Numbers: A Writer’s Guide to Statistics,” 3rd edition, by Cohn, Cope and Runkel. Wiley-Blackwell, 2011.

“Know Your Chances: Understanding Health Statistics,” by Woloshin, Schwartz and Welch. University of California Press, 2008.

Websites on statistics, studies and media coverage

The National Center for Complementary and Integrative Health (NCCIH), part of the National Institutes of Health, has launched Know the Science, an initiative aiming to clarify and explain scientific topics related to health research. The effort features a variety of materials, including interactive modules, quizzes and videos, to provide engaging, straightforward content.

The Extent and Consequences of P-Hacking in Science

Scientific method: Statistical errors

Bonferroni Correction In Regression: Fun To Say, Important To Do

An in-depth, highly technical explanation of the math and statistics behind Bonferroni correction
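The core of the correction is far simpler than the full math: with m tests, compare each p-value to alpha/m, or equivalently multiply each p-value by m (capping at 1). A minimal sketch with hypothetical p-values:

```python
# Minimal sketch of the Bonferroni correction: with m tests, either compare
# each p-value to alpha/m or, equivalently, multiply each p-value by m
# (capped at 1) and compare to alpha. P-values below are hypothetical.

def bonferroni(p_values, alpha=0.05):
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    significant = [p_adj < alpha for p_adj in adjusted]
    return adjusted, significant

# Five hypothetical tests; only the first survives the correction.
adj, sig = bonferroni([0.004, 0.02, 0.03, 0.2, 0.9])
print([round(p, 3) for p in adj])
print(sig)
```

Note that p-values of 0.02 and 0.03, "significant" on their own, no longer clear the corrected threshold once five comparisons are made.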

Berkeley Initiative for Transparency in the Social Sciences (BITSS): An effort to promote transparency in empirical social science research. The program is fostering an active network of social science researchers and institutions committed to strengthening scientific integrity in economics, political science, behavioral science, and related disciplines. Central to the BITSS effort is the identification of useful strategies and tools for maintaining research transparency, including the use of study registries, pre-analysis plans, data sharing, and replication.

SearchMedica: This search engine scans journals, systematic reviews, and evidence-based articles that are written and edited for clinicians practicing in primary care and all major specialties. It also selects and scans patient-directed web sites, online CME courses, and government databases of clinical trials and practice guidelines.

STATS by George Mason University

Reporting on Health from the University of Southern California Annenberg

Knight Science Journalism Tracker: This daily blog on health and science news from MIT has published 1,889 story reviews and more than 1,700 blog posts about studies, statistics and media coverage.