While age and race/ethnicity are nearly always reported among the baseline characteristics in Table 1 of a medical study, socioeconomic status is reported far less often, as documented in this 2012 study, “Participant demographics reported in “Table 1” of randomised controlled trials: a case of ‘inverse evidence’?” The study gives examples of proxy characteristics that can be used for socioeconomic status and discusses why it’s important to include these data in Table 1.
Understanding controlled trials: Crossover trials is a brief explainer from BMJ that reviews the basic rationale for, and potential drawbacks of, crossover trials.
As the title implies, “Fundamentals of clinical trial design” takes you all the way through the key elements of a clinical trial in accessible language that is detailed but not tedious.
This peer-reviewed article, “A pragmatic view on pragmatic trials,” gives an overview of the differences between pragmatic trials and explanatory trials, including what a pragmatic trial is, why a researcher would conduct one, and what its advantages and limitations are.
After a brief definition of pragmatic trials and their primary components, this page from the “Living Textbook of Pragmatic Clinical Trials” from the National Institutes of Health provides a chart listing pragmatic trials that are NIH Collaboratory Demonstration Projects.
Clinical Misinformation: The Case of Benadryl Causing Dementia: Both readers and journalists may sometimes forget to consider whether the findings of a study with a very specific population can be applied to other populations. In other words, how generalizable are the findings? This blog post from the NYU Langone Journal of Medicine, Clinical Correlations, provides an excellent case study on the dangers of extrapolating information from one study with a very narrow demographic patient population to others who do not share that population’s characteristics.
This brief description of the nocebo effect at Smithsonian Magazine links to a lot of the research on the phenomenon.
If you’re looking for a good overview of what scientific research looks like — the good, the bad and the ugly — “Science Isn’t Broken” by Christie Aschwanden is an excellent primer digging into P values and statistical significance, ways that bias affects research studies, the role of peer review, why papers are retracted and other key features of the scientific research ecosystem. The article is aimed at science research generally, but every bit of it applies to medical research. An interactive multimedia feature on P values and “p-hacking” helps make an abstract concept more immediately accessible as well.
Do Clinical Trials Work?
An op-ed by Clifton Leaf, author of The Truth in Small Doses: Why We’re Losing the War on Cancer – and How to Win It, published July 13, 2013, in The New York Times.
Bias and Spin in Reporting of Breast Cancer Trials
By Zosia Chustecka, published Jan. 15, 2013, in Medscape Medical News.
Survival of the Wrongest
An article by David H. Freedman about how personal-health journalism ignores the fundamental pitfalls baked into all scientific research and serves up a daily diet of unreliable information, published Jan. 2, 2013, in the Columbia Journalism Review.
“Lies, Damned Lies, and Medical Science”
A story about the work of Dr. John Ioannidis, published in The Atlantic, November 2010.
“The Perils of Bite-Sized Science”
Professors from the Universities of Liverpool and Bristol discuss the recent trend toward shorter medical research studies.