Clinical Misinformation: The Case of Benadryl Causing Dementia: Both readers and journalists may sometimes forget to consider whether the findings of a study with a very specific population can be applied to other populations. In other words, how generalizable are the findings? This blog post from the NYU Langone Journal of Medicine, Clinical Correlations, provides an excellent case study on the dangers of extrapolating information from one study with a very narrow demographic patient population to others who do not share that population’s characteristics.
If you’re looking for a good overview of what scientific research looks like — the good, the bad and the ugly — “Science Isn’t Broken” by Christie Aschwanden is an excellent primer digging into P values and statistical significance, ways that bias affects research studies, the role of peer review, why papers are retracted and other key features of the scientific research ecosystem. The article is aimed at science research generally, but every bit of it applies to medical research. An interactive multimedia feature on P values and “p-hacking” helps make an abstract concept more immediately accessible as well.
Do Clinical Trials Work? An op-ed by Clifton Leaf, author of The Truth in Small Doses: Why We're Losing the War on Cancer – and How to Win It, published July 13, 2013, in The New York Times.
Survival of the Wrongest An article about how personal-health journalism ignores the fundamental pitfalls baked into all scientific research and serves up a daily diet of unreliable information by David H. Freedman and published Jan. 2, 2013, in the Columbia Journalism Review.
National Guideline Clearinghouse NGC was created by the Agency for Healthcare Research and Quality (AHRQ) in partnership with the American Medical Association and the American Association of Health Plans (now America's Health Insurance Plans [AHIP]). Its mission is to provide health professionals, health care providers, health plans, integrated delivery systems, purchasers, and others an accessible mechanism for obtaining objective, detailed information on clinical practice guidelines and to further their dissemination, implementation, and use.
In “Toxicology: The learning curve,” Dan Fagin reports on the hypothesis some researchers hold regarding unexpected dose-response effects, such as a dramatic effect from a small dose.
Looking for how much a particular procedure, test or service should cost? The Healthcare Bluebook lets consumers search for the “fair price” of any healthcare service based on their zip code. (If you don’t input a zip code, it provides the national average.) The people behind the site describe themselves as “a diverse team of clinicians, healthcare experts, strategists and technologists dedicated to transforming healthcare with transparency.” The site is helpful when you want to provide an approximate cost of a procedure, test or other service in a story.
All vaccines that the CDC recommends are included in the Vaccines for Children program, and these prices are listed on the CDC website. The price list includes a separate chart for vaccines recommended for adults and for flu vaccines. More importantly for reporters, the charts also list private sector vaccine prices.
Grants awarded by the federal Department of Health and Human Services can be searched using the TAGGS (Tracking Accountability in Government Grants System) tool. You can search by topic or by state, institution and the name of the investigator.
The National Institutes of Health has a grant search tool called RePORTER (Research Portfolio Online Reporting Tools Expenditures and Results). In addition to the keyword search, you can search by funding category, location, and the names of investigators.
Guides to reporting
The difference between science journalism and science communication may seem so subtle as not to be important at first blush, at least to a layperson reader. The difference between the two is crucial, however: one requires a journalist to report all facts and relevant perspectives on an issue without bias toward the actors involved. The other — science communication — is aimed at communicating, and sometimes advocating for, science, often without concern about the people behind it. In an excellent essay at the Guardian, science journalist Brooke Borel explains the difference using a recent example of conflicts of interest in GMO research. A similar essay by Bianca Nogrady explores the same issue.
“False balance” when covering controversial medical studies: This Columbia Journalism Review article, Sticking with the truth: How ‘balanced’ coverage helped sustain the bogus claim that childhood vaccines can cause autism, is a case study in how misunderstanding what “objective” coverage really entails can contribute to public misinformation and misconceptions when the evidence for one side of an issue is overwhelmingly greater than the contradictory evidence. Always seeking “both sides,” and giving them equivalent weight, can produce a misleading “false balance.”
Uncertainty is a way of life for scientists, but readers and even journalists are usually less comfortable with it. Sense about Science provides a guide on Making Sense of Uncertainty, which covers how scientists express their degree of confidence about results, how claims of uncertainty can be used to undermine valid evidence and how policymakers and stakeholders make decisions in spite of uncertainty.
Though highly technical and not for the layperson, this journal article, Hazard Ratio in Clinical Trials, offers a deep dive into how hazard ratios are frequently misunderstood or misused, what they really mean, and what clinicians need to understand about them. This link is most helpful to those with a research or statistical background or for a reporter writing about medical studies for trade publications aimed at clinicians.
"How to Interpret Figures in Reports of Clinical Trials" Looking at a graph you don't understand? This article from the BMJ explains the four most common types of graphs in medical studies: flow diagrams, Kaplan-Meier plots, forest plots, and repeated measure plots. The full text of BMJ articles is available for free to reporters who register for media access.
Although written specifically for early career researchers, the Sense About Science guide “Peer review: the nuts and bolts” (pdf here) also gives an in-depth walk-through of peer review for journalists. Seeing what advice is given to early career researchers about the process can also provide reporters with insights about the goals and processes of peer review.
With a primer on the peer review process, including a short overview guide to peer review, this page also provides a number of other resources from Sense About Science to help journalists wrap their heads around peer review, the issues associated with the process and how it relates to open access publishing.
This Nature profile of Jeffrey Beall, an academic librarian and researcher at the University of Colorado in Denver, provides a good introduction to predatory journals and Beall’s list, one resource — albeit not unbiased — of potentially predatory journals.
Making sense of screening: Understanding the difference between screening and diagnostic tests and the risks and benefits of screening tests is essential to reporting on them accurately. This page contains a link to a complete guide on screening tests from Sense About Science as well as the basics nicely presented in this slideshow (PDF).
Varieties of bias to guard against: This PDF from MedicalBiostatistics.com gives an extensive overview of 32 different types of bias that can occur in medical research publishing. It is impossible to design a study that contains no bias at all, but there are ways to minimize bias, which this document discusses as well.
Bias in randomized controlled trials: This sample book chapter explains the types of bias that can specifically occur in randomized controlled trials. It is a little long, but it’s written in layperson terms with clear subtitles and sections that make it highly readable and accessible.
Validity, reliability, and generalizability in qualitative research: To better understand generalizability in a study as well as how to assess the reliability and validity of study findings, this article, “Validity, reliability, and generalizability in qualitative research,” briefly discusses five published studies to illustrate how each of these concepts applies.
To better understand p-hacking, this Nature article dives into the possible statistical errors in research.
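A minimal simulation can make the core problem concrete. The sketch below (illustrative numbers of my own, not drawn from the Nature article) uses the fact that under a true null hypothesis, p-values are uniformly distributed: if a researcher tests 20 independent outcomes and reports whichever one is "significant," the chance of at least one p < 0.05 is about 64 percent even when no real effect exists.

```python
import random

random.seed(42)

ALPHA = 0.05           # conventional significance threshold
TESTS_PER_STUDY = 20   # outcomes a hypothetical researcher tries
N_STUDIES = 10_000     # simulated studies, all with NO real effect

# Under the null, each test's p-value is uniform on [0, 1].
studies_with_hit = 0
for _ in range(N_STUDIES):
    p_values = [random.random() for _ in range(TESTS_PER_STUDY)]
    if min(p_values) < ALPHA:
        studies_with_hit += 1

simulated = studies_with_hit / N_STUDIES
# Analytic answer: 1 - P(all 20 tests non-significant)
analytic = 1 - (1 - ALPHA) ** TESTS_PER_STUDY
print(f"simulated: {simulated:.2f}, analytic: {analytic:.2f}")
```

The point for a reporter: a single "p < 0.05" finding means little without knowing how many comparisons were run to get it.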
While knowing the five basic steps to a systematic review is helpful, this more in-depth article goes into detail on each of the steps.
The eight stages of systematic reviews and meta-analysis (done together) are outlined in detail in this article from the Journal of the Canadian Academy of Child and Adolescent Psychiatry.
Sensitivity and specificity can be challenging to understand, and this article clearly describes the differences between them and how they relate to false positives, false negatives, positive predictive value and negative predictive value. It also walks you through concrete examples.
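A quick worked example, with invented numbers, shows why these distinctions matter in practice: even a test with high sensitivity and specificity can have a low positive predictive value when the disease is rare.

```python
# Hypothetical screening scenario (all figures invented for illustration):
# 10,000 people screened, true disease prevalence of 1%.
POPULATION = 10_000
PREVALENCE = 0.01
SENSITIVITY = 0.90   # P(test positive | disease present)
SPECIFICITY = 0.95   # P(test negative | disease absent)

diseased = POPULATION * PREVALENCE        # 100 people truly ill
healthy = POPULATION - diseased           # 9,900 truly healthy

true_pos = diseased * SENSITIVITY         # 90 correctly flagged
false_neg = diseased - true_pos           # 10 missed cases
true_neg = healthy * SPECIFICITY          # 9,405 correctly cleared
false_pos = healthy - true_neg            # 495 false alarms

# Predictive values depend on prevalence, not just the test itself.
ppv = true_pos / (true_pos + false_pos)   # ~0.15: most positives are false
npv = true_neg / (true_neg + false_neg)   # ~0.999
print(f"PPV: {ppv:.3f}, NPV: {npv:.3f}")
```

In this sketch, fewer than one in six positive results reflects real disease — a useful intuition to carry into any story about screening.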
“Explaining Odds Ratios” explains what odds ratios are and what the mathematical formula is for them, including several illustrative examples.
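The arithmetic is simple enough to sketch directly. The counts below are invented for illustration; the point is that an odds ratio is not the same as a risk ratio, and the two diverge as the outcome becomes more common.

```python
# Hypothetical 2x2 table (counts invented for illustration):
#                 disease   no disease
# exposed            a=20        b=80
# unexposed          c=10        d=90
a, b, c, d = 20, 80, 10, 90

odds_exposed = a / b                          # 0.25
odds_unexposed = c / d                        # ~0.111
odds_ratio = odds_exposed / odds_unexposed    # same as (a*d)/(b*c) = 2.25

risk_exposed = a / (a + b)                    # 0.20
risk_unexposed = c / (c + d)                  # 0.10
risk_ratio = risk_exposed / risk_unexposed    # 2.0

print(f"OR: {odds_ratio:.2f}, RR: {risk_ratio:.2f}")
```

Here the odds ratio (2.25) overstates the risk ratio (2.0); with a rarer outcome the two would nearly coincide, which is why odds ratios are often read, sometimes wrongly, as relative risks.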
Don't understand the difference between incidence, prevalence, rates, ratios or other measures of disease occurrence? Check out this helpful cheat sheet from the University of Ottawa in Canada.
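A small worked example (hypothetical numbers) captures the core distinction the cheat sheet draws: prevalence is a snapshot of existing disease, while incidence counts new cases among people at risk over time.

```python
# Hypothetical town of 10,000 people followed for one year
# (all figures invented for illustration).
POPULATION = 10_000
existing_cases = 200   # people already ill on Jan. 1
new_cases = 50         # diagnosed during the year

# Prevalence: proportion of the population with the disease
# at a point in time (here, Jan. 1).
prevalence = existing_cases / POPULATION      # 0.02, i.e. 2%

# Incidence: new cases arising in the at-risk population over a
# period. People already ill cannot become a "new" case.
at_risk = POPULATION - existing_cases         # 9,800 people at risk
incidence = new_cases / at_risk               # ~0.51% per year

print(f"prevalence: {prevalence:.1%}, incidence: {incidence:.2%} per year")
```

A chronic disease can have high prevalence but low incidence (people live with it for years), while a short illness can show the reverse — a distinction worth keeping straight when reading epidemiological claims.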
Statistics Glossary is an easy-to-use cheat sheet to help you remember what important statistical concepts mean, from the Dartmouth Institute for Health Policy and Clinical Practice.
Compendium of Primers is a collection of articles on understanding statistics and statistical methods in medical research. It was originally published by the now-defunct journal Effective Clinical Practice, a publication of the American College of Physicians.
The Cochrane Collaboration has put together this entertaining tutorial about P values and statistics.
Berkeley Initiative for Transparency in the Social Sciences (BITSS): An effort to promote transparency in empirical social science research. The program fosters an active network of social science researchers and institutions committed to strengthening scientific integrity in economics, political science, behavioral science, and related disciplines. Central to the BITSS effort is the identification of useful strategies and tools for maintaining research transparency, including the use of study registries, pre-analysis plans, data sharing, and replication.
SearchMedica: This search engine scans journals, systematic reviews, and evidence-based articles that are written and edited for clinicians practicing in primary care and all major specialties. It also selects and scans patient-directed web sites, online CME courses, and government databases of clinical trials and practice guidelines.
Examine.com: Independent analysis on nutrition and supplement studies from research scientists, public health professionals and nutritionists.