Plenty of pitfalls in reporting on medical studies #ahcj13

Paul Tullis

About Paul Tullis

Paul Tullis is an independent journalist in Los Angeles. He is attending Health Journalism 2013 on an AHCJ-California Health Journalism Fellowship, which is supported by The California HealthCare Foundation.

Seventy percent of news articles on medical studies fail to discuss the costs of the treatments studied, quantify potential harms and benefits, or evaluate the quality of evidence, said Gary Schwitzer, publisher of Health News Review, which has reviewed 1,800 such stories over the past seven years.

“Seventy percent of articles make things look terrific, risk-free and without a price tag,” Schwitzer said at a panel on the first day of Health Journalism 2013. “It strikes me that we can do a better job helping to educate patients, health consumers, news consumers and voters.”

Ivan Oransky. Photo by Pia Christensen

The criteria Schwitzer mentioned are among 10 that Health News Review presents as crucial elements to any article about medical studies.

HNR publishes “systematic, objective, criteria-driven reviews” of news articles and broadcast segments. Each is reviewed by a journalist and by someone with an advanced degree – a medical degree, a doctorate or a master’s in public health.

Another mistake that Schwitzer’s reviewers frequently cite is “idolatry of the surrogate marker.” This comes from not understanding, or not reporting, that surrogate outcomes, such as tumor shrinkage, do not always translate into meaningful outcomes, e.g. longer life. Others include failure to recognize that publication in a medical journal does not mean that the findings are important, or even true. Medical journals retract about 400 articles a year, said Ivan Oransky, M.D., executive editor of Reuters Health, co-founder of Retraction Watch and founder of Embargo Watch.

“Not all journals are created equal,” Oransky said. “If you’re a freelance writer, do you go to the worst-paying and least-visible outlet first? Of course not, and neither do scientists.”

Oransky’s talk was titled “How Not To Get It Wrong,” and he delivered a list of things reporters can do to avoid common mistakes. One is to always read the whole study, not just the abstract. Recognizing deadline pressures and the near-audible groans in the room, Oransky said that at Reuters, his reporters typically write two stories a day and when they report on studies, they always read the whole study and seek comments from experts. (Health News Review has a list of scientists who have averred that they have no conflict of interest and will offer quotes for reporters on study design and methodology.)

Only by reading the study, Oransky said, can a reporter learn whether it was well designed. Health News Review’s “hierarchy of evidence” places meta-analyses at the top, with randomized double-blind controlled trials below that and cohort studies at number three. At the bottom are ideas, editorials and opinions.

(This reporter was reminded at this point of a quotation delivered to him by Dr. Robert Fox of the Cleveland Clinic, for a story written last year about patient-driven research in MS. “People without a scientific background,” he said, “often view all scientific papers with equal weight. Well, scientists don’t.”)

Another necessary question to ask is whether the study was on humans. “It’s remarkable there are any mice left with cancer, depression or restless leg syndrome,” Oransky wryly noted.

He also offered a couple of smart questions reporters can ask of study authors. “I try to impress sources that I’m going to ask them meaningful questions, and that I expect them to answer them, because as a reporter I’m skeptical but fair.” Do a “find” search in the paper for “power” to get the power calculation — a measure of whether the study was big enough to answer the question it’s researching. “If you don’t find one, ask,” Oransky said. “If they were unable to recruit all the subjects they wanted, why not?”

Other smart questions include, “Were those your primary endpoints?” (i.e., did the researchers not find what they were looking for and instead just publish some data they found interesting?) And “Is that endpoint clinically significant?” He gave the example of a reduction of blood pressure from 120/80 to 119/79 as something that’s statistically significant, but unlikely to reduce the risk of stroke or heart attack.

“Journals were never meant to be a source of daily news, but as a conversation between scientists,” Schwitzer said. “Yet that’s what they’ve become.”

“If you cover health, do you only cover studies and clinical news?” he concluded. “Ask yourself: How many of your stories are about treatments, tests, products and procedures? Do you think you might be reporting too much of this?” He pointed to a session at Health Journalism 2013 on covering health delivery and health insurance for the unemployed and homeless as an example of other stories reporters on the health beat might cover, perhaps to greater public service. “Have you spoken to your editor about this? Can we help? Let us know.”

A 70-page slim guide on covering medical research is available free to AHCJ members, and the association has a fast-growing core topic area with resources on the subject.
