As the title implies, “Fundamentals of clinical trial design” takes you all the way through the key elements of a clinical trial in accessible language that is detailed but not tedious.
This peer-reviewed article, “A pragmatic view on pragmatic trials,” gives an overview of the differences between pragmatic trials and explanatory trials, including what a pragmatic trial is, why a researcher would conduct one, and what its advantages and limitations are.
Clinical Misinformation: The Case of Benadryl Causing Dementia: Both readers and journalists may sometimes forget to consider whether the findings of a study with a very specific population can be applied to other populations. In other words, how generalizable are the findings? This blog post from the NYU Langone Journal of Medicine, Clinical Correlations, provides an excellent case study on the dangers of extrapolating information from one study with a very narrow demographic patient population to others who do not share that population’s characteristics.
If you’re looking for a good overview of what scientific research looks like — the good, the bad and the ugly — “Science Isn’t Broken” by Christie Aschwanden is an excellent primer digging into P values and statistical significance, ways that bias affects research studies, the role of peer review, why papers are retracted and other key features of the scientific research ecosystem. The article is aimed at science research generally, but every bit of it applies to medical research. An interactive multimedia feature on P values and “p-hacking” helps make an abstract concept more immediately accessible as well.
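The multiple-comparisons arithmetic behind p-hacking is easy to verify yourself. A minimal Python sketch (the 20-test scenario is a hypothetical illustration, not taken from the article):

```python
def prob_at_least_one_false_positive(k, alpha=0.05):
    """Probability of at least one false positive across k independent tests,
    when each test has a false-positive rate of alpha."""
    return 1 - (1 - alpha) ** k

# With 20 comparisons, the chance of a spurious p < 0.05 is roughly 64%,
# even when no real effect exists.
print(round(prob_at_least_one_false_positive(20), 2))  # → 0.64
```

This is why a study that tests many outcomes and reports only the “significant” ones deserves extra scrutiny.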
Do Clinical Trials Work? An op-ed by Clifton Leaf, author of The Truth in Small Doses: Why We're Losing the War on Cancer – and How to Win It, published July 13, 2013, in The New York Times.
Survival of the Wrongest: An article about how personal-health journalism ignores the fundamental pitfalls baked into all scientific research and serves up a daily diet of unreliable information, by David H. Freedman, published Jan. 2, 2013, in the Columbia Journalism Review.
National Guideline Clearinghouse NGC was created by the Agency for Healthcare Research and Quality (AHRQ) in partnership with the American Medical Association and the American Association of Health Plans (now America's Health Insurance Plans [AHIP]). Its mission is to provide health professionals, health care providers, health plans, integrated delivery systems, purchasers, and others an accessible mechanism for obtaining objective, detailed information on clinical practice guidelines and to further their dissemination, implementation, and use.
Comparative effectiveness research
Toolkit: Right Care, Right Patient, Right Time: Comparative Effectiveness Research in the U.S.: In 2019, the Alliance for Health Policy coordinated a series of programming on comparative effectiveness research (CER) and patient-centered outcomes research (PCOR). The authorization for the Patient-Centered Outcomes Research Institute (PCORI) is set to expire sometime this year. A decade after the creation of the institute, conversations around CER, health care value, patient-centered care, and real-world evidence continue. This Alliance toolkit seeks to ensure policymakers are informed about CER and its potential impact by providing the basics of CER, facts on PCORI, and links to additional resources.
This paper in the journal Multidisciplinary Healthcare, “Person-first language: are we practicing what we preach?” lays out statistics, background and considerations in the use of person-first versus identity-first language.
While this explainer from the FDA on off-label drugs is a helpful primer for journalists wanting to understand these drugs better, this article’s real value is in providing plain-language answers to consumers’ most common questions. If you’re writing a piece that addresses some of these same questions or writing about off-label drugs in general, you can link to this article or see examples of how to explain a concept in accessible language.
To better understand all the ways the FDA’s Emergency Use Authorization mechanism works and what is and is not allowed under it, check out the agency’s comprehensive webpage on EUA. The page includes links to PDFs with much greater detail on regulatory policy, indications for use and related information, as well as a list of all products that have received EUAs.
Following funding of medical research
Looking for how much a particular procedure, test or service should cost? The Healthcare Bluebook offers consumers an opportunity to search for what the “fair price” of any healthcare service is based on their zip code. (If you don’t input a zip code, it provides the national average.) The people behind the site describe themselves as “a diverse team of clinicians, healthcare experts, strategists and technologists dedicated to transforming healthcare with transparency.” The site is helpful when you want to provide an approximate cost of a procedure, test or other service in a story.
All vaccines that the CDC recommends are included in the Vaccines for Children program, and these prices are listed on the CDC website. The price list includes a separate chart for vaccines recommended for adults and for flu vaccines. More importantly for reporters, the charts also provide private-sector vaccine prices.
Grants awarded by the federal Department of Health and Human Services can be searched using the TAGGS (Tracking Accountability in Government Grants System) tool. You can search by topic or by state, institution and the name of the investigator.
The National Institutes of Health has a grant-searching tool called RePORTER (Research Portfolio Online Reporting Tools Expenditures and Results). In addition to the keyword search, you can search by funding category, location, and the names of investigators.
Guides to reporting
Tips for interviewing people with disabilities: Covering medical studies often means interviewing people who live with conditions discussed in a study. If you’ll be meeting in person with someone who has a disability, the interview will go more smoothly and productively if you both feel comfortable. This tip sheet from the National Center on Disability and Journalism offers tips on what reporters should do or consider before and during the interview. Although the tips focus on in-person interviews, many of the suggestions could apply to phone interviews as well.
The difference between science journalism and science communication may seem so subtle as not to be important at first blush, at least to a layperson reader. The difference between the two is crucial, however: one requires a journalist to report all facts and relevant perspectives on an issue without bias toward the actors involved. The other — science communication — is aimed at communicating science and possibly even science advocacy, often without concern about the people behind the science. In an excellent essay at the Guardian, science journalist Brooke Borel explains the difference using a recent example of conflicts of interest in GMO research. A similar essay by Bianca Nogrady explores the same issue.
“False balance” when covering controversial medical studies: This Columbia Journalism Review article, Sticking with the truth: How ‘balanced’ coverage helped sustain the bogus claim that childhood vaccines can cause autism, is a case study in how misunderstanding what “objective” coverage really entails can contribute to public misinformation and misconceptions when the evidence for one side of an issue overwhelmingly outweighs the contradictory evidence. Always seeking “both sides,” and giving them equivalent weight, can result in a misleading “false balance.”
Uncertainty is a way of life for scientists, but readers and even journalists are usually less comfortable with it. Sense about Science provides a guide on Making Sense of Uncertainty, which covers how scientists express their degree of confidence about results, how uncertainty can undermine valid evidence and how policymakers and stakeholders make decisions in spite of uncertainty.
Though highly technical and not for the layperson, this journal article, Hazard Ratio in Clinical Trials, offers a deep dive into how hazard ratios are frequently misunderstood or misused, what they really mean, and what clinicians need to understand about them. This link is most helpful to those with a research or statistical background or for a reporter writing about medical studies for trade publications aimed at clinicians.
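One source of the confusion the article describes is that a hazard ratio compares event *rates* over follow-up time, not simple proportions of patients. Under the simplest possible model — constant event rates in each arm — it reduces to a ratio of two rates. A hedged Python sketch with made-up numbers (real trials estimate this with Cox regression):

```python
def hazard_ratio(events_a, person_years_a, events_b, person_years_b):
    """Ratio of constant hazard rates (events per person-year).
    A toy illustration of the intuition: how fast events occur
    in one arm versus the other."""
    rate_a = events_a / person_years_a  # e.g., deaths per person-year, treatment
    rate_b = events_b / person_years_b  # deaths per person-year, control
    return rate_a / rate_b

# Hypothetical: 10 deaths over 500 person-years on treatment,
# 20 deaths over 400 person-years on control.
print(hazard_ratio(10, 500, 20, 400))  # ≈ 0.4
```

A hazard ratio of 0.4 here means events occurred at 40% of the control arm's rate, which is not the same as saying 60% fewer patients ever had an event.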
The webpage for the Advisory Committee on Immunization Practices (ACIP) provides information on upcoming meetings, specific vaccine working groups, a list of all committee members and the committee’s recommendations.
"How to Interpret Figures in Reports of Clinical Trials" Looking at a graph you don't understand? This article from the BMJ explains the four most common types of graphs in medical studies: flow diagrams, Kaplan-Meier plots, forest plots, and repeated measure plots. The full text of BMJ articles is available for free to reporters who register for media access.
Although written specifically for early career researchers, the Sense About Science guide “Peer review: the nuts and bolts” (pdf here) also gives an in-depth walk-through of peer review for journalists. Seeing what advice is given to early career researchers about the process can also provide reporters with insights about the goals and processes of peer review.
With a primer on the peer review process, including a short overview guide to peer review, this page also provides a number of other resources from Sense About Science to help journalists wrap their heads around peer review, the issues associated with the process, and how it relates to open-access publishing.
This Nature profile of Jeffrey Beall, an academic librarian and researcher at the University of Colorado in Denver, provides a good introduction to predatory journals and Beall’s list, one resource — albeit not unbiased — of potentially predatory journals.
This entry on priming at Verywell gives a nice overview of the phenomenon as well as details on different types of priming that can occur in studies, particularly social science and nutrition studies.
The article “Risk-Adjusted Mortality: Problems and Possibilities,” though a bit dated (2012), offers a good discussion of the limitations of “ratio of observed-to-expected deaths” as a hospital quality measure, including concerns related to medical documentation and the severity of a patient’s condition.
Communicating Risk in a Soundbite — This guide, created by the Science Media Centre in the United Kingdom, is aimed at scientists, but it still has a lot to offer journalists since journalists often have to convey the same kind of information with an accurate sense of the risk and appropriate context.
Making sense of screening: Understanding the difference between screening and diagnostic tests and the risks and benefits of screening tests is essential to reporting on them accurately. This page contains a link to a complete guide on screening tests from Sense About Science as well as the basics nicely presented in this slideshow (PDF).
This comprehensive list from Poynter is divided into Research and Records, Writing About Sexual Abuse of Children, Advocacy Organizations, Comprehending Pedophilia, Resources from the Specialized Reporting Institute, Sex Offender Registries, A Perpetrator's Viewpoint and Books, along with a list of experts, Twitter tags, blogs and reporters’ articles.
Those who will be spending a lot of time reporting on sexual abuse of children may want to take the time to go through this 80-minute online seminar, Covering Child Sex Abuse: Lessons from the Sandusky Story, from Poynter. The course is taught by reporter Sara Ganim, who broke the first story on the sex abuse scandal involving Jerry Sandusky, the former Penn State University assistant football coach.
You can use this online ClinCalc sample size calculator to determine what sample size a study needs to achieve enough power for statistically significant findings. It is designed for researchers, but journalists can use it if they have questions about whether a study has enough participants for the findings to be meaningful.
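As a rough check on what such calculators do, the standard normal-approximation formula for comparing two proportions can be sketched in a few lines of Python. The parameters below are illustrative, and ClinCalc's own method may differ slightly:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Participants needed per arm to detect a difference between two
    proportions (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a drop from a 20% event rate to 10% at 80% power
# requires roughly 200 participants per arm.
print(n_per_group(0.20, 0.10))  # → 197
```

The key journalistic takeaway: the smaller the expected difference between groups, the larger the trial must be for its findings to mean anything.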
While the concept of single- and double-blinding in studies seems pretty straightforward, it’s anything but in actual practice. In this article, “Blinding in pharmacological trials: the devil is in the details,” the authors discuss in depth some of the challenges and limitations to consider in designing drug trials with respect to blinding.
Detection bias is another term for surveillance bias, which is explained well here.
The University of Oxford’s Catalogue of Bias entry on Selection Bias offers multiple plain-language examples and describes the potential pitfalls of this bias.
This plain-language summary of confounding by indication, from the University of Oxford’s Catalogue of Bias, provides helpful diagrams for visualizing how this type of bias can interfere with interpreting the relationship between exposures and outcomes.
“Attrition bias in randomized controlled trials” not only provides a good overview of what attrition bias is and why it’s a problem, but it also describes ways that attrition bias can be overcome. In speaking with researchers whose studies had substantial attrition, you could ask them what they did to account for that.
If you’re struggling to understand how — or how much — attrition bias can affect study findings, especially in randomized controlled trials, “Reporting attrition in randomised controlled trials” explains the effects and includes a sample trial with sample results as an example.
Recall Bias can be a Threat to Retrospective and Prospective Research Designs: Recall bias represents a major threat to the internal validity of studies using self-reported data. It arises from the tendency of subjects to report past events differently between the two study groups. This pattern of recall errors can lead to differential misclassification of the related variable among study subjects, with a subsequent distortion of the measure of association in any direction from the null, depending on the magnitude and direction of the bias. Although recall bias has largely been viewed as a common concern in case-control studies, it has also been documented as an issue in some prospective cohort and randomized controlled trial designs.
Varieties of bias to guard against: This PDF from MedicalBiostatistics.com gives an extensive overview of 32 different types of bias that can occur in medical research publishing. It is impossible to design a study that contains no bias at all, but there are ways to minimize bias, which this document discusses as well.
Bias in randomized controlled trials: This is a sample chapter from a book that explains the types of bias that can specifically occur in randomized controlled trials. It is a little long, but it’s written in layperson terms with clear subtitles and sections that make it highly readable and accessible.
This incredible resource, “It's the Effect Size, Stupid” from University of Durham (UK) professor Robert Coe, is an excellent overview of effect size. It gets wonky, but even if you prefer to avoid the math and extra explanations, bookmark this link for the table that interprets effect sizes, allowing you to estimate a percentage effect (similar to relative risk) from Cohen’s d, for example. Also skim the Conclusions at the bottom.
The Psychiatry journal article “Estimating the Size of Treatment Effects” discusses five types of effect sizes: Cohen’s d (aka standardized mean difference), relative risk, odds ratio, number needed to treat and area under the curve.
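Of those, Cohen's d is the easiest to compute by hand: the difference between two group means divided by the pooled standard deviation. A minimal Python sketch with made-up outcome scores:

```python
import math
from statistics import mean, variance

def cohens_d(group1, group2):
    """Standardized mean difference between two independent samples."""
    n1, n2 = len(group1), len(group2)
    # Pooled SD weights each group's sample variance by its degrees of freedom.
    pooled_var = ((n1 - 1) * variance(group1) +
                  (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / math.sqrt(pooled_var)

# Hypothetical outcome scores, treatment vs. control:
d = cohens_d([3, 4, 5, 6, 7], [1, 2, 3, 4, 5])
print(round(d, 2))  # → 1.26, a "large" effect by Cohen's rule of thumb
```

Statistical significance tells you whether a difference is likely real; an effect size like d tells you whether it is big enough to matter.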
This article “Interpretation of Cost-Effectiveness Analyses” from the Journal of General Internal Medicine looks at some of the challenges of doing cost-effectiveness analyses and using QALYs, including the influence of different variables (such as funding sources).
This free article “Calculating QALYs, comparing QALY and DALY calculations” gets pretty technical with the math of how quality-adjusted life years (QALYs) and disability-adjusted life years (DALYs) are calculated, but for those wanting to know the nitty gritty details, it covers them well.
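Before the nitty-gritty math, the core arithmetic of a QALY is simple: years of life multiplied by a utility weight between 0 (death) and 1 (perfect health), summed over periods of life. A toy Python sketch (the weights and durations below are invented for illustration):

```python
def qalys(intervals):
    """Sum of (years lived x utility weight) over periods of life.
    Weights run from 0 (death) to 1 (perfect health)."""
    return sum(years * weight for years, weight in intervals)

# Hypothetical patient: 5 years in good health (weight 0.9),
# then 10 years living with a chronic condition (weight 0.6).
print(qalys([(5, 0.9), (10, 0.6)]))  # → 10.5
```

So 15 calendar years of life count as 10.5 quality-adjusted years; cost-effectiveness analyses then ask what an intervention costs per QALY gained.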
“Quality Adjusted Life Years,” a slideshow from Johns Hopkins Bloomberg School of Public Health, explains QALYs a bit more gradually and more accessibly.
Statistics How To is a blog and resource site that explains the basics of statistics and probability with a simple index of links and even a page of calculators where you can enter whatever raw data you have to see whether you get the same results from a method a study's authors said they used.
Each of these articles from Explorable and Statistics How To explains convenience sampling, why it might be used and its advantages and disadvantages.
To better understand types of sampling in clinical trials, “Types of Samples” from the University of California at Davis explains the difference between probability sampling and non-probability sampling and gives examples of each.
Validity, reliability, and generalizability in qualitative research: To better understand generalizability in a study as well as how to assess the reliability and validity of study findings, this article briefly discusses five published studies to illustrate how each of these concepts applies.
To better understand p-hacking, this Nature article dives into the possible statistical errors in research.
While knowing the five basic steps to a systematic review is helpful, this more in-depth article goes into detail on each of the steps.
The eight stages of systematic reviews and meta-analysis (done together) are outlined in detail in this article from the Journal of the Canadian Academy of Child and Adolescent Psychiatry.
Sensitivity and specificity can be challenging to understand, and this article clearly describes the differences between them and how they relate to false positives, false negatives, positive predictive value and negative predictive value. It also walks you through concrete examples.
“Explaining Odds Ratios” explains what odds ratios are and what the mathematical formula is for them, including several illustrative examples.
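The formula itself fits in one line: with exposed people who had the outcome (a), exposed who did not (b), unexposed who did (c) and unexposed who did not (d), the odds ratio is (a/b)/(c/d), i.e. ad/bc. A Python sketch with invented counts:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
       a = exposed with outcome,   b = exposed without outcome,
       c = unexposed with outcome, d = unexposed without outcome."""
    return (a * d) / (b * c)

# Hypothetical: 20 of 100 exposed people had the outcome,
# versus 10 of 100 unexposed people.
print(odds_ratio(a=20, b=80, c=10, d=90))  # → 2.25
```

An odds ratio of 2.25 is not the same as "2.25 times the risk" — the relative risk here would be 0.20/0.10 = 2.0 — and the two diverge further as the outcome becomes more common.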
Don't understand the difference between incidence, prevalence, rates, ratios or other measures of disease occurrence? Check out this helpful cheat sheet from the University of Ottawa in Canada.
Statistics Glossary is an easy-to-use cheat sheet to help you remember what important statistical concepts mean, from the Dartmouth Institute for Health Policy and Clinical Practice.
Compendium of Primers is a collection of articles on understanding statistics and statistical methods in medical research. It was originally published by the now-defunct journal Effective Clinical Practice, a publication of the American College of Physicians.
The Cochrane Collaboration has put together this entertaining tutorial about P values and statistics.
Websites on statistics, studies and media coverage
World Health Organization: Understanding the organization, budget, funding and activities of the World Health Organization can put its role in terms of medical research into context. The Kaiser Family Foundation offers all that in this overview.
The National Institutes of Health launched the Science, Health, and Public Trust initiative to share strategies and best practices to help convey complex research results to the public in ways that are clear, credible, and accurate. The Perspectives section offers insights on biomedical communication from NIH experts, and the Tools section provides useful aids.
The National Center for Complementary and Integrative Health (NCCIH), part of the National Institutes of Health, has launched Know the Science, an initiative aiming to clarify and explain scientific topics related to health research. This effort features a variety of materials, including interactive modules, quizzes, and videos, to provide engaging, straightforward content.
Berkeley Initiative for Transparency in the Social Sciences (BITSS): An effort to promote transparency in empirical social science research. The program is fostering an active network of social science researchers and institutions committed to strengthening scientific integrity in economics, political science, behavioral science, and related disciplines. Central to the BITSS effort is the identification of useful strategies and tools for maintaining research transparency, including the use of study registries, pre-analysis plans, data sharing, and replication.
SearchMedica: This search engine scans journals, systematic reviews, and evidence-based articles that are written and edited for clinicians practicing in primary care and all major specialties. It also selects and scans patient-directed web sites, online CME courses, and government databases of clinical trials and practice guidelines.
Examine.com: Independent analysis on nutrition and supplement studies from research scientists, public health professionals and nutritionists.