Health Journalism Glossary

Effect size

  • Medical Studies

Journalists covering medical research write about effect size all the time but may not recognize that that's the name for it. Any time you compare relative risk, absolute risk, etc. between groups, you're discussing the study's effect size — the magnitude or extent of the relationship between two variables. How big is the difference between groups? Effect size measures the amount of difference an intervention makes (in randomized controlled trials/RCTs) or how strongly an outcome is associated with an exposure (in observational studies). Effect size is essential for reporting on medical research and for determining clinical significance/relevance.
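As an illustration, here is a short sketch (with made-up counts, not from any real trial) of how relative risk, absolute risk difference, and number needed to treat are computed from a two-arm trial:

```python
# Hypothetical counts: 200 patients per arm; 20 events in the
# treatment arm, 40 in the control arm. These numbers are invented
# purely to show the arithmetic.
events_treat, n_treat = 20, 200
events_ctrl, n_ctrl = 40, 200

risk_treat = events_treat / n_treat   # risk in treatment arm: 0.10
risk_ctrl = events_ctrl / n_ctrl      # risk in control arm: 0.20

relative_risk = risk_treat / risk_ctrl        # 0.5 (risk halved)
abs_risk_diff = risk_ctrl - risk_treat        # 0.10 (10 percentage points)
nnt = 1 / abs_risk_diff                       # 10 patients treated per event avoided

print(relative_risk, abs_risk_diff, nnt)
```

Note how the same trial yields a dramatic-sounding relative effect ("risk cut in half") and a more modest absolute one (10 fewer events per 100 patients) — which is why reporting both is recommended.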

Deeper dive
Despite the word “effect,” discussing effect size does not mean or imply that an intervention or exposure caused the outcome. While RCTs are designed to support causal conclusions, you can discuss effect size in an observational study while recognizing that its associations are correlational. The effect sizes most familiar to journalists are likely relative risk, hazard ratio, odds ratio, risk ratio, absolute risk difference, number needed to treat, number needed to harm, mean difference and standard deviation. But other ways to measure effect size may be less familiar: Cohen’s d/standardized mean difference, Cohen’s f2, Cohen’s w, Cohen’s h, Cohen’s q, eta-squared, Cramer’s V, Pearson r correlation, Spearman’s rho, Hedges’ g, R-squared, area under the curve, Glass’s delta, quartile (or similar) ranking, and omega-squared, among many others.
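One of the less familiar measures above, Cohen's d (the standardized mean difference), can be sketched with invented data; the group scores below are hypothetical, and the pooled-standard-deviation formula is the standard textbook version:

```python
import statistics

# Hypothetical outcome scores (e.g., a symptom scale) for two groups.
treatment = [12, 14, 11, 13, 15, 12, 14]
control = [16, 18, 17, 15, 19, 17, 16]

mean_t, mean_c = statistics.mean(treatment), statistics.mean(control)
sd_t, sd_c = statistics.stdev(treatment), statistics.stdev(control)
n_t, n_c = len(treatment), len(control)

# Pooled standard deviation across the two groups.
pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)) ** 0.5

# Cohen's d: difference in means, expressed in standard-deviation units.
cohens_d = (mean_t - mean_c) / pooled_sd
print(round(cohens_d, 2))
```

A common rule of thumb (from Cohen himself) reads |d| ≈ 0.2 as small, 0.5 as medium and 0.8 as large, though such cutoffs should be applied with caution.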

The table in this explainer can help translate some of these measures, such as Cohen’s d, into lay-friendly descriptions, but if you can’t find or calculate absolute risk, you may need to consult a biostatistician or ask the study authors to characterize the effect size and clinical significance for lay readers.