Tag Archives: risk

Don’t fudge the facts on chocolate studies

Brenda Goodman

About Brenda Goodman

Brenda Goodman (@GoodmanBrenda), an Atlanta-based freelancer, is AHCJ’s topic leader on medical studies, curating related material at healthjournalism.org. She welcomes questions and suggestions on medical study resources and tip sheets at brenda@healthjournalism.org.

Studies that support a link between chocolate and good health are popular with readers. But the reality is that most chocolate studies are observational in nature, and they are therefore limited in what they can tell us about chocolate's supposed benefits.

Photo by “Nikita!” at Flickr.com.

Skilled health reporters use these kinds of studies as opportunities to gently educate readers about the limits of observational research, such as confounders. Read on for tips on recognizing confounding (factors that confuse or obscure the association between a primary exposure of interest and an outcome) and on explaining it to your readers, viewers and listeners.
Here’s a recent case in point—a study published in Neurology called “Chocolate Consumption and Risk of Stroke.”

The study followed 37,000 men in Sweden for 10 years. That’s a plus. Bigger numbers and longer follow-up usually mean more reliable results.

At the start of the study, researchers asked the men to recall how much chocolate they’d eaten in the previous year. That’s the first problem with the study. People are bad at remembering what they eat. It’s called self-report bias. There’s a concise and well-sourced discussion of this major flaw in nutrition studies at the website Unite for Sight.

Based on that one measure, the men were divided into four groups, ranging from those who reported eating no chocolate to those who ate the most, about 63 grams a week. That's about the size of one-and-a-half regular Hershey bars. Researchers used hospital discharge records to confirm strokes, which was another strength of the study. Because the Swedish health system keeps extensive medical records on its citizens, it's unlikely that many strokes went uncounted.


Over the next 10 years, 1,995 men – about 5 percent of the study population – had a first stroke. The men who ate the most chocolate had a slightly lower risk of stroke compared to men who said they didn't eat chocolate. That would all be pretty suggestive, except that a penchant for indulging a chocolate craving wasn't the only way the men differed. The biggest chocolate consumers were also younger and more educated, and they were less likely to smoke or to have high blood pressure. They also reported eating more red meat, drinking more wine, and eating more fruits and vegetables. With the exception of eating a lot of red meat, those other factors are also linked to lower stroke risk. They are called confounders because they confuse the effect of the exposure (chocolate) on the outcome (stroke).

Researchers adjusted their data to try to remove the influence of those factors. Adjustment is an important step, but it’s not perfect, as Gary Taubes explains in a post for Discover magazine’s Crux blog.

The table where the researchers reported the adjustment is worth a close look.

Here's a tip about confounding: compare the relative risks (RR) for total stroke before and after adjustment for confounders. If there's little confounding in a study, those numbers should be very nearly the same. If they are different, that's a good indication that confounding is a problem and that other factors influencing risk may not have been fully measured. For an effect size that's already small to begin with (just a 23 percent reduced relative risk), the reduction shrinks even more after adjustment, to 17 percent, for the men who ate the most chocolate. Keep in mind, that's just a difference in relative risk. If 5 percent of the men had a stroke over the course of the study, a 17 percent reduction in that risk would lower the absolute risk of having a stroke by only about 0.8 percentage points.
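If you want to check that arithmetic yourself, here's a minimal Python sketch. The function name is ours; the 5 percent baseline and the 17 percent adjusted relative reduction come from the study as described above.

```python
def absolute_risk_reduction(baseline_risk, relative_risk_reduction):
    """Convert a relative risk reduction into an absolute one."""
    return baseline_risk * relative_risk_reduction

# About 5 percent of the men had a first stroke over 10 years; the
# adjusted relative risk reduction for the heaviest chocolate eaters
# was 17 percent.
arr = absolute_risk_reduction(0.05, 0.17)
print(f"Absolute risk reduction: {arr:.2%}")  # 0.85% -- under one percentage point
```

The same function shows why unreported absolute risks matter: applied to the meta-analysis's pooled 19 percent relative reduction (assuming, for illustration only, the same 5 percent baseline), the absolute reduction is still only about 0.95 percentage points.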

Perhaps recognizing the limits of their own study, researchers took the work a step further and conducted a meta-analysis, a study of studies.  Meta-analyses are powerful tools for establishing the weight of evidence. But as Gary Schwitzer points out in AHCJ’s slim guide, “Covering Medical Research,” the quality of a meta-analysis depends on the quality of studies it includes. Two of the five studies included in the meta-analysis found associations between chocolate and stroke risk that were non-significant. All measured chocolate consumption by asking study participants to remember how much chocolate they ate and how often they ate it. There’s that self-report bias again.

When all the results were pooled, chocolate appeared to reduce the relative risk of stroke by 19 percent. Reductions in absolute risk were not reported, but they would probably look a lot less dramatic.

That’s a lot to try to explain to readers who are short on time and attention.  Here’s how Amy Norton handled it in her story for Reuters Health:

It starts with a deft lede:

Men who regularly indulge their taste for chocolate may have a somewhat decreased risk of suffering a stroke, according to a study out Wednesday.

That's very different from saying that chocolate "may reduce" or "could lower" a man's risk of having a stroke. The vast majority of stories used that approach, and while it's not technically inaccurate, words like "reduce" and "lower" imply that chocolate is causing the stroke reduction, which is something the study simply doesn't have the power to prove.

Norton goes on to give the study some context but also to explain why readers shouldn’t go on an immediate chocolate binge (at least not to prevent a stroke):

The study, published in the journal Neurology, is hardly the first to link chocolate to cardiovascular benefits. Several have suggested that chocolate fans have lower rates of certain risks for heart disease and stroke, like high blood pressure.

But those studies do not prove that chocolate is the reason. And the new one, funded by the Swedish Council for Working Life and Social Research and the Swedish Research Council, doesn't either, according to a neurologist not involved in the study.

Other things to like about this story include the use of independent researchers for comment, a discussion about flavonoids in chocolate and why scientists think they may benefit health, and numbers that reflect absolute as well as relative risks.

NEJM article: Media partially to blame for slow adoption of cost-effective health care

Pia Christensen

About Pia Christensen

Pia Christensen (@AHCJ_Pia) is the managing editor/online services for AHCJ. She manages the content and development of healthjournalism.org, coordinates social media efforts of AHCJ and assists with the editing and production of association guides, programs and newsletters.

In a new “Perspectives” piece in the New England Journal of Medicine, Victor R. Fuchs, Ph.D., and Arnold Milstein, M.D., M.P.H., examine why cost-effective health care has been slow to catch on in the United States.

They point to a number of factors, including insurance companies’ desire to protect profits, large employers that don’t want to alienate employees, legislators who collect campaign contributions from the health industry, hospital administrators protecting their revenue, doctors who are generally resistant to change, manufacturers that fear losing market share and more.

The authors also assign some blame to the media, saying news coverage doesn't adequately explain who really pays for health care:

Great harm is done when employment-based insurance is discussed as if it were a gift from “generous” employers rather than an alternative to wage increases.

They also mention a topic that is surely familiar to Covering Health readers: relative risk vs. absolute benefit.

The media also mislead the public by emphasizing the relative benefit of clinical interventions (“reducing risk of death by one third”) when the absolute benefit (“reducing risk from 0.03 to 0.02”) is usually more relevant.

“Misleading headlines, designed to attract larger audiences,” also get a share of the blame.

What do you think? Does media coverage have an effect on how cost-effective care is accepted? If so, do you have suggestions on what reporters could do differently?

Coverage of bacon, cell phones doesn’t add up

Pia Christensen


Zoe Williams, a columnist for the UK’s Guardian, weighs in on coverage of two recent studies, saying that the Daily Mail “takes the role of the Friend Who Exaggerates.”

Williams takes issue with how the paper reported a study about cellular phones and the risk of brain cancer and how it reported on a Harvard study about bacon and heart disease.


Photo by GoodNCrazy via Flickr

She points out that, in the cell phone study, "there were 10 usage groups, ranging from very low to very high. In the very highest group – those reporting using their phone for 12 or more hours a day – there was a raised chance of both glioma and meningioma." Williams and the study's author agree that such a level of use is improbable.

While other papers reported that the study did not find a statistically significant increase in risk, the Daily Mail ran the story under the headline "Long conversations on mobile phones can increase risk of cancer, suggests 10-year study." Williams also notes that the study found no dose-response relationship and that there is no evidence that radio waves – the kind emitted by cell phones – cause cancer.

In the story about bacon and heart disease – which runs under the headline "A bacon sandwich a day raises risk of heart disease by half" – she notes that the Daily Mail reports "the risk of heart disease goes up by 42% with every two-ounce (about 56g) serving of processed meat." Williams calls the 42 percent increase small in epidemiological terms, and the Daily Mail doesn't tell readers whether that figure is an absolute or a relative risk.

Williams, who calls such stories “thrill-tainment,” concludes that “Health journalism (and it’s not just the Mail) needs more scientific credibility, even to function as entertainment.”

Tanning beds: What do the numbers really mean?

Pia Christensen


This is a guest post from Ivan Oransky, M.D., editor of Reuters Health and AHCJ’s treasurer, written at my invitation.

May has been declared "Melanoma Awareness Month" or "Skin Cancer Awareness Month" – depending on which group is pitching you – and reporters are doubtless receiving press releases and announcements from a number of groups, including the Melanoma Research Foundation, the Skin Cancer Foundation, hospitals, doctors and other organizations.

Those press releases often point to the World Health Organization, which reports that “use of sunbeds before the age of 35 is associated with a 75% increase in the risk of melanoma” – a statistic often repeated in news stories about tanning beds.

Photo by Whatsername? via Flickr

But what does that really mean? Is it 75 percent greater than an already-high risk, or a tiny one? If you read the FDA’s “Indoor Tanning: The Risks of Ultraviolet Rays,” or a number of other documents from the WHO and skin cancer foundations, you won’t find your actual risk.

That led AHCJ member Hiran Ratnayake to look into the issue in March for The (Wilmington, Del.) News Journal, after Delaware passed laws limiting teens’ access to tanning salons. The 75 percent figure is based on a review of a number of studies, Ratnayake learned. The strongest such study was one that followed more than 100,000 women over eight years.

But as Ratnayake noted, that study “found that less than three-tenths of 1 percent who tanned frequently developed melanoma while less than two-tenths of 1 percent who didn’t tan developed melanoma.” That’s actually about a 55 percent increase, but when the study was pooled with others, the average was a 75 percent increase. In other words, even if the risk of melanoma was 75 percent greater than two-tenths of one percent, rather than 55 percent greater, it would still be far below one percent.

For some perspective on those numbers, Ratnayake interviewed Lisa Schwartz, M.D., M.S., whose work on statistical problems in studies and media reports is probably familiar to many AHCJ members. "Melanoma is pretty rare and almost all the time, the way to make it look scarier is to present the relative change, the 75 percent increase, rather than to point out that it is still really rare," Schwartz, a general internist at Veterans Affairs Medical Center in White River Junction, Vt., told him.

In a nutshell, the difference between the skin doctors' point of view and Schwartz's is the difference between relative risk and absolute risk. Absolute risk just tells you the chance of something happening, while relative risk tells you how that risk compares to another risk, as a ratio. If a risk doubles, for example, that's a relative risk of 2, or 200 percent. If it halves, that's 0.5, or 50 percent. Generally, when you're dealing with small absolute risks, as we are with melanoma, the relative risk differences will seem much greater than the absolute risk differences. You can see why someone lobbying to ban something – or, in the case of a new drug, trying to show a dramatic effect – would probably want to use the relative risk.
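A short sketch makes the distinction concrete, using rounded versions of the melanoma rates Ratnayake reported (the study's exact rates work out to the roughly 55 percent increase; these rounded figures give 50 percent):

```python
# Approximate melanoma rates from the study Ratnayake cited (rounded).
risk_frequent_tanners = 0.003  # just under three-tenths of 1 percent
risk_non_tanners = 0.002       # just under two-tenths of 1 percent

relative_risk = risk_frequent_tanners / risk_non_tanners
absolute_difference = risk_frequent_tanners - risk_non_tanners

print(f"Relative risk: {relative_risk:.1f} (a {relative_risk - 1:.0%} increase)")
print(f"Absolute difference: {absolute_difference:.1%}, about one extra case "
      f"per {1 / absolute_difference:,.0f} people")
```

The relative number sounds alarming; the absolute one shows why Schwartz calls melanoma "still really rare."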

This is not an argument for or against tanning beds. It's an argument for clear explanations of the data behind policy decisions. For some people, the cosmetic benefits of tanning beds – and the benefit of vitamin D, for which there are, of course, other sources – might be worth a tiny increase in the risk of melanoma. For others, any increased risk of skin cancer is unacceptable. (And of course, for the tanning industry, the benefits can be measured in other ways – dollars.) But if you leave things at "a 75 percent increase," you're not giving your readers the most important information they need to judge for themselves.

So when you read a study that says something doubles the risk of some terrible disease, ask: Doubles from what to what?

Related

These numbers also might come up in reporting about the health reform bill, as they do in "Indoor Tanning Getting Moment in the Sun" (March 26, 2010). From the story:

Over the past decade, indoor tanning has increasingly been likened to other maligned habits, cigarette smoking in particular.

And with the passage of the new health care bill, government officials are prepared to take that comparison one step further. A 10 percent tax could be levied on indoor tanning as early as July, in an effort to offset some of the health care bill’s multi-billion-dollar budget.

AHCJ resources on writing about medical studies:

In addition, look for a slim guide about covering medical studies that AHCJ will publish this summer.

Duo writes about how health statistics can mislead

Andrew Van Dam

About Andrew Van Dam

Andrew Van Dam of The Wall Street Journal previously worked at the AHCJ offices while earning his master’s degree at the Missouri School of Journalism, and he has blogged for Covering Health ever since.

Writing in mathematics-focused Plus Magazine, Mike Pearson and David Spiegelhalter examine not only the variety of methods used to report health statistics, but also just how each of those methods is employed to mislead physicians, patients and journalists alike. The piece was adapted from their Understanding Uncertainty Web site. The site, which is aimed in part at helping journalists understand statistics and probability, is profiled in this story.

The duo point out and illustrate common pitfalls and summarize relevant research. Not only do they cover fundamentals, such as the advantages that "number needed to treat" and, to a lesser extent, absolute risk (1 in 100,000) have over the popular relative risk (30 percent more likely), they also go much deeper. For example:

Photo by Letting Go of Control via Flickr.

One of the most misleading, but rather common, tricks is to use relative risks when talking about the benefits of a treatment, for example to say that “Women taking tamoxifen had about 49% fewer diagnoses of breast cancer”, while potential harms are given in absolute risks: “The annual rate of uterine cancer in the tamoxifen arm was 30 per 10,000 compared to 8 per 10,000 in the placebo arm”. This tends to exaggerate the benefits, minimise the harms, and in any case make it hard to compare them. This way of presenting risk is known as mismatched framing, and was found in a third of studies published in the British Medical Journal.
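To see what mismatched framing hides, here's a minimal sketch that puts the quoted benefit and harm on the same per-10,000 scale. Note that the baseline breast-cancer rate below is a hypothetical figure chosen for illustration; the quoted passage gives only the relative reduction.

```python
# Mismatched framing: benefit quoted as relative ("49% fewer"), harm as
# absolute (30 vs. 8 per 10,000). Express both per 10,000 to compare.
baseline_breast_cancer_per_10k = 60  # HYPOTHETICAL annual rate, placebo arm
relative_reduction = 0.49            # "49% fewer diagnoses"

benefit_per_10k = baseline_breast_cancer_per_10k * relative_reduction
harm_per_10k = 30 - 8                # extra uterine cancers per 10,000

print(f"Benefit: about {benefit_per_10k:.0f} fewer breast cancers per 10,000")
print(f"Harm: about {harm_per_10k} extra uterine cancers per 10,000")
```

Framed this way, a reader can actually weigh the two numbers against each other.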

And mixing and matching numbers isn’t the only way statistics can be misleading; the writers list many. Even the humble denominator can be manipulated.

For example, people have been offered a prize for drawing a red ball from a bag, and then given the choice of two bags: one containing 1 red ball and 9 white balls, the other containing 8 red balls and 92 white balls. The majority chose the bag with 8 red balls, presumably reflecting a view that it gave more opportunities to win, even though the chance of picking a red ball was lower for this bag. Similarly, people confronted with the statement “Cancer kills 2,414 people out of 10,000,” rated cancer as more risky than those told “Cancer kills 24.14 people out of 100″. The potential influence of the size of the numerator and denominator is known as the ratio bias. Frequencies are generally used in risk communication, but it is important to keep a common denominator in all comparisons.
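The fix for ratio bias is simple enough to verify in a few lines: put both statements over a common denominator.

```python
import math

risk_a = 2414 / 10_000  # "Cancer kills 2,414 people out of 10,000"
risk_b = 24.14 / 100    # "Cancer kills 24.14 people out of 100"

print(f"{risk_a:.2%} vs. {risk_b:.2%}")  # 24.14% vs. 24.14%
print(math.isclose(risk_a, risk_b))      # True: the two risks are identical
```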

For a thorough primer on statistics and health, the authors highly recommend Helping Doctors and Patients Make Sense of Health Statistics (pdf), an engaging 2008 paper that makes heavy use of examples and anecdotes to illustrate key issues in the interpretation of statistics.

That paper’s authors recommend the following best practices for writing about health statistics:

We recommend using frequency statements instead of single-event probabilities, absolute risks instead of relative risks, mortality rates instead of survival rates, and natural frequencies instead of conditional probabilities.
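Here's a sketch of the last of those recommendations (natural frequencies instead of conditional probabilities), using hypothetical screening-test numbers chosen for illustration:

```python
# Conditional-probability framing: prevalence 1%, sensitivity 90%,
# false-positive rate 9% (all HYPOTHETICAL). Given a positive test,
# what is the chance of actually having the disease?
prevalence = 0.01
sensitivity = 0.90
false_positive_rate = 0.09

ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + false_positive_rate * (1 - prevalence)
)
print(f"P(disease | positive test) = {ppv:.0%}")  # about 9 percent

# Natural-frequency framing of the same numbers: of 1,000 people, 10 have
# the disease and 9 of them test positive; of the 990 without it, about 89
# also test positive. So 9 of 98 positives are real.
print(f"9 true positives out of {9 + 89} positives = {9 / (9 + 89):.0%}")
```

The paper's authors report that readers, and doctors too, answer questions like this far more accurately when the numbers are presented as natural frequencies.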

Also of interest is this related editorial (pdf) in which media are described as "enablers" of statistical illiteracy. The author also points out that, even if journalists communicate risk in the most objective fashion possible, people from different cultural backgrounds will still perceive it differently. It includes an interesting side note about the far-reaching impact of how physicians are allowed to define their own legal standard of care.
