Evaluating medical evidence for journalists

Health Journalism 2012

• Instructor: Ivan Oransky, M.D., executive editor, Reuters Health; blogger, Retraction Watch and Embargo Watch
• Instructor: Gary Schwitzer, publisher, HealthNewsReview.org

By Brandon Stahl

A 2009 New York Times article, “Drugs to deter some cancers are not taken,” included a reference to tamoxifen, a drug its manufacturers claim in advertising reduces the risk of breast cancer by 48 percent.

But a closer look at the numbers tells a different story. The 48 percent figure is a relative reduction: over six years, the chance of developing breast cancer was 1.7 percent for women taking tamoxifen, compared with 3.3 percent for women on a placebo, an absolute reduction of just 1.6 percentage points. In other words, if 100 women took tamoxifen instead of a placebo for six years, roughly two cases of breast cancer would be prevented. Meanwhile, women taking tamoxifen were twice as likely to develop a serious blood clot or uterine cancer.
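The arithmetic behind that distinction is simple enough to check yourself. Here is a minimal sketch in Python (not from the panel; the trial figures are the ones above) showing how the same data yield both the advertised relative number and the far smaller absolute one:

```python
# Six-year breast cancer risk from the tamoxifen example above.
placebo_risk = 0.033   # 3.3 percent risk on placebo
drug_risk = 0.017      # 1.7 percent risk on tamoxifen

# Relative risk reduction -- the figure drug advertising tends to cite.
rrr = (placebo_risk - drug_risk) / placebo_risk
print(f"Relative risk reduction: {rrr:.0%}")         # -> 48%

# Absolute risk reduction -- the drop an individual woman actually sees.
arr = placebo_risk - drug_risk
print(f"Absolute risk reduction: {arr:.1%} points")  # -> 1.6% points

# Number needed to treat -- how many women must take the drug for
# six years to prevent one case of breast cancer.
nnt = 1 / arr
print(f"Number needed to treat: {nnt:.0f}")          # -> ~62
```

Reporting the absolute reduction, or the number needed to treat, gives readers a far more concrete sense of a drug’s benefit than the relative figure alone.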

That example was one of many used by journalists Gary Schwitzer and Ivan Oransky, M.D., during the Health Journalism 2012 panel “Evaluating medical evidence” to illustrate that the media too often exaggerate the benefits of drugs and medical therapies while downplaying their harms.

“And there are always harms,” said Schwitzer, the publisher of HealthNewsReview.org, which has reviewed thousands of health stories for accuracy over the past several years.

Schwitzer said he’s found that 70 percent of those stories fail to adequately discuss the costs of treatments, 66 percent fail to quantify benefits, and 65 percent fail to quantify harms.

That’s an obvious disservice to readers.

“Seventy percent of the time we’re presenting a kid-in-the-candy-store view of U.S. health care,” Schwitzer said. “Everything is terrific … there is no harm.”

Oransky, the executive editor of Reuters Health and a blogger for Retraction Watch and Embargo Watch, stressed that articles in medical journals often turn out to be wrong. He cited a Wall Street Journal graphic showing that the number of corrections in scientific journals jumped from about 20 in 2001 to nearly 350 in 2010.

Part of the advice he offered to reporters on how to better report on scientific journal studies: “Always read the study.” Failing to do that, he said, “is journalistic malpractice.”

“This should not be a controversial slide,” he said, “but apparently it is,” referencing a debate he witnessed that was later written up by The Guardian.

He also said reporters should determine whether a study was peer-reviewed, how large it was, whether it was well designed and whether it was conducted in humans.

Reporters should also quantify a study’s results, costs and side effects, rather than simply reporting that “patients improved.”

Another tip from Oransky: Find a biostatistician to interpret a study’s results.

“They are lonely geeks,” Oransky said. “They want to talk with you.”

Schwitzer offered similar advice for improving reporting on medical evidence, including recognizing that publication in the prestigious New England Journal of Medicine does not guarantee a study’s findings are true.

Both also cautioned against fear-mongering reporting that leads people to undergo unnecessary tests and screenings.

“All screening tests cause harm,” Schwitzer said. “Some may do good.”
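The panel didn’t walk through the math behind that warning, but a short, purely illustrative sketch (all numbers here are hypothetical assumptions, not figures from the session) shows one common source of screening harm: when a condition is rare, most positive results are false positives.

```python
# Hypothetical screening test: prevalence, sensitivity and specificity
# are assumed values chosen for illustration only.
prevalence = 0.01    # 1 in 100 people actually has the condition
sensitivity = 0.90   # P(positive test | condition present)
specificity = 0.95   # P(negative test | condition absent)

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)

# Positive predictive value: the chance a positive result is real.
ppv = true_positives / (true_positives + false_positives)
print(f"Chance a positive result is a true case: {ppv:.0%}")  # -> ~15%
```

Under those assumed numbers, roughly 85 percent of positive results would be false alarms, each of which can mean anxiety and unnecessary follow-up procedures for a healthy person.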

Brandon Stahl is investigations editor at the Duluth (Minn.) News Tribune and was a 2012 AHCJ-Rural Health Journalism Fellow.
