Duo writes about how health statistics can mislead


Writing in the mathematics-focused Plus Magazine, Mike Pearson and David Spiegelhalter examine not only the variety of methods used to report health statistics, but also how each of those methods can mislead physicians, patients and journalists alike. The piece was adapted from their Understanding Uncertainty Web site, which is aimed in part at helping journalists understand statistics and probability and is profiled in this story.

The duo point out and illustrate common pitfalls and summarize relevant research. They cover fundamentals, such as the advantages that “number needed to treat” and, to a lesser extent, absolute risk (1 in 100,000) have over the popular relative risk (30 percent more likely), but they also go much deeper. For example:


One of the most misleading, but rather common, tricks is to use relative risks when talking about the benefits of a treatment, for example to say that “Women taking tamoxifen had about 49% fewer diagnoses of breast cancer”, while potential harms are given in absolute risks: “The annual rate of uterine cancer in the tamoxifen arm was 30 per 10,000 compared to 8 per 10,000 in the placebo arm”. This tends to exaggerate the benefits, minimise the harms, and in any case make it hard to compare them. This way of presenting risk is known as mismatched framing, and was found in a third of studies published in the British Medical Journal.
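To see how much the framing matters, here is a minimal sketch in Python of the same effect expressed as a relative risk, an absolute risk and a “number needed to treat,” with the benefit and harm finally placed on the same scale. The baseline breast-cancer rate is a made-up placeholder for illustration, not the actual trial figure; the uterine-cancer rates come from the quote above.

```python
# Illustrative sketch: how one treatment effect looks under different framings.
# The baseline breast-cancer rate is a hypothetical placeholder, NOT the real
# tamoxifen trial figure; the uterine-cancer rates are taken from the quote.

def absolute_risk_reduction(control_rate, treated_rate):
    return control_rate - treated_rate

def relative_risk_reduction(control_rate, treated_rate):
    return (control_rate - treated_rate) / control_rate

def number_needed_to_treat(control_rate, treated_rate):
    return 1 / absolute_risk_reduction(control_rate, treated_rate)

# Benefit: assume a baseline of 60 diagnoses per 10,000 women per year
# (hypothetical) and the quoted "49% fewer" relative reduction.
baseline = 60 / 10_000
treated = baseline * (1 - 0.49)

# Harm: the absolute rates quoted above, 30 vs. 8 uterine cancers per 10,000.
harm_treated, harm_control = 30 / 10_000, 8 / 10_000

print(f"Benefit, relative framing : {relative_risk_reduction(baseline, treated):.0%} fewer diagnoses")
print(f"Benefit, absolute framing : {absolute_risk_reduction(baseline, treated) * 10_000:.0f} fewer per 10,000")
print(f"Number needed to treat    : {number_needed_to_treat(baseline, treated):.0f}")
print(f"Harm, absolute framing    : {(harm_treated - harm_control) * 10_000:.0f} extra per 10,000")
```

Once both sides are expressed per 10,000, the benefit and harm can at least be compared directly, which is exactly the comparison that mismatched framing obscures.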

And mixing and matching numbers isn’t the only way statistics can mislead; the writers list many others. Even the humble denominator can be manipulated.

For example, people have been offered a prize for drawing a red ball from a bag, and then given the choice of two bags: one containing 1 red ball and 9 white balls, the other containing 8 red balls and 92 white balls. The majority chose the bag with 8 red balls, presumably reflecting a view that it gave more opportunities to win, even though the chance of picking a red ball was lower for this bag. Similarly, people confronted with the statement “Cancer kills 2,414 people out of 10,000,” rated cancer as more risky than those told “Cancer kills 24.14 people out of 100”. The potential influence of the size of the numerator and denominator is known as the ratio bias. Frequencies are generally used in risk communication, but it is important to keep a common denominator in all comparisons.
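At bottom, the ratio bias is a failure to put fractions over a common denominator. A small Python sketch of that rescaling, using the numbers quoted above:

```python
# Re-express each risk over a common denominator so comparisons are fair.
# The figures are the ones quoted above (the two bags and the two cancer statements).

def per_10_000(events, out_of):
    """Rescale events/out_of to a rate per 10,000."""
    return events / out_of * 10_000

# The two bags: more red balls does not mean a better chance of winning.
print(per_10_000(1, 10))    # 1 red among 10 balls  -> 1,000 per 10,000
print(per_10_000(8, 100))   # 8 red among 100 balls ->   800 per 10,000

# The two cancer statements describe the same risk once rescaled.
print(per_10_000(2_414, 10_000))   # about 2,414 per 10,000
print(per_10_000(24.14, 100))      # about 2,414 per 10,000
```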

For a thorough primer on statistics and health, the authors highly recommend Helping Doctors and Patients Make Sense of Health Statistics (pdf), an engaging 2008 paper that makes heavy use of examples and anecdotes to illustrate key issues in the interpretation of statistics.

That paper’s authors recommend the following best practices for writing about health statistics:

We recommend using frequency statements instead of single-event probabilities, absolute risks instead of relative risks, mortality rates instead of survival rates, and natural frequencies instead of conditional probabilities.
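The last recommendation, natural frequencies instead of conditional probabilities, is the easiest to see worked out. Here is a brief Python sketch of a screening example; the prevalence, sensitivity and false-positive rate are illustrative assumptions, not figures from the paper:

```python
# Illustrative sketch: natural frequencies versus conditional probabilities.
# The prevalence, sensitivity and false-positive rate below are made-up
# assumptions for illustration, not figures from the paper discussed above.

prevalence = 0.01           # 1% of people screened actually have the disease
sensitivity = 0.90          # P(positive test | disease)
false_positive_rate = 0.09  # P(positive test | no disease)

# Conditional-probability route (Bayes' theorem), which readers find opaque:
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
p_disease_given_positive = prevalence * sensitivity / p_positive

# Natural-frequency route: imagine 1,000 people screened.
n = 1_000
with_disease = prevalence * n                               # 10 people
true_positives = with_disease * sensitivity                 # 9 people
false_positives = (n - with_disease) * false_positive_rate  # about 89 people

print(f"Bayes: P(disease | positive) = {p_disease_given_positive:.2f}")
print(f"Natural frequencies: {true_positives:.0f} of the "
      f"{true_positives + false_positives:.0f} people who test positive are ill")
```

Both routes give the same answer; the natural-frequency version simply makes the false positives visible as a count of people, which is what readers (and journalists) can actually picture.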

Also of interest is this related editorial (pdf), in which media are described as “enablers” of statistical illiteracy. The author also points out that, even if journalists communicate risk in the most objective fashion possible, people from different cultural backgrounds will still perceive it differently. It includes an interesting side note about the far-reaching impact of how physicians are allowed to define their own legal standard of care.
