Tag Archives: effectiveness

Know the nuances of vaccine efficacy when covering COVID-19 vaccine trials

I’ve written in previous posts about what to look for in COVID-19 vaccine trials and red flags to monitor. The two most important outcomes in vaccine trials are the vaccine’s safety and its efficacy. Recall that efficacy is different from effectiveness: efficacy refers to how well the vaccine prevents infection in the clinical trial, while effectiveness refers to how well it prevents infection in the real world, in a broader and more diverse population. It’s important to understand how vaccine efficacy is calculated, along with other aspects of efficacy, so that your reporting on vaccine trial results is precise and accurate. Continue reading
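For reference, the efficacy figure reported from a trial is typically the relative risk reduction between the vaccine and placebo arms. Here is a minimal sketch of that arithmetic in Python; the function name and the case counts below are hypothetical illustrations, not figures from any specific trial.

def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
    # Vaccine efficacy = 1 - (attack rate in vaccine arm / attack rate in placebo arm)
    attack_rate_vaccine = cases_vaccinated / n_vaccinated
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vaccine / attack_rate_placebo

# Hypothetical example: 8 cases among 20,000 vaccinated participants
# versus 160 cases among 20,000 placebo recipients.
print(f"{vaccine_efficacy(8, 20_000, 160, 20_000):.0%}")  # prints 95%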

Comparative Effectiveness Research Fellows named for 2019

Thirteen journalists have been selected for the 2019 class of the AHCJ Fellowship on Comparative Effectiveness Research. The fellowship program was created with support from the Patient-Centered Outcomes Research Institute (PCORI) to help reporters and editors produce more accurate, in-depth stories on medical research and how treatment decisions are made.

The fellows will gather in Washington, D.C., the week of Sept. 9 for four days of presentations, how-to database sessions and discussions with researchers.

Continue reading 

Comparative Effectiveness Research Fellows named for 2018

Eleven journalists have been chosen for the fourth class of the AHCJ Fellowship on Comparative Effectiveness Research. The fellowship program was created with support from the Patient-Centered Outcomes Research Institute to help reporters and editors produce more accurate, in-depth stories on medical research and how medical decisions are made.

The fellows will gather in Washington, D.C., the week of Oct. 7 for a series of presentations, roundtables, how-to database sessions and interactions with researchers.

Continue reading 

CJR: Be skeptical of miraculous study results

In the Columbia Journalism Review, Katherine Bagley urges journalists to use caution when reporting the results of medical studies, citing as an example coverage of a recent study on the effectiveness of using stem cells to halt or even reverse multiple sclerosis.

Done with caution and a critical eye, coverage of limited but promising research can provide a needed dose of optimism for people with MS and their families. Unfortunately, in this case, that journalistic prudence was almost totally missing.

Bagley said that over-the-top reporting and selective coverage of the small-scale, control-free study had inspired false hope and misled readers.

Op-ed: Obama’s quality metrics could be dangerous

On the Wall Street Journal‘s Op-Ed page, Jerome Groopman and Pamela Hartzband cite the shortcomings of a quality metric-based system in Massachusetts and describe various misguided quality metrics. Groopman and Hartzband are both on the staff of Beth Israel Deaconess Medical Center in Boston and on the faculty of Harvard Medical School.

Initially, the quality improvement initiatives focused on patient safety and public-health measures. The hospital was seen as a large factory where systems needed to be standardized to prevent avoidable errors. A shocking degree of sloppiness existed with respect to hand washing, for example, and this largely has been remedied with implementation of standardized protocols. Similarly, the risk of infection when inserting an intravenous catheter has fallen sharply since doctors and nurses now abide by guidelines. Buoyed by these successes, governmental and private insurance regulators now have overreached. They’ve turned clinical guidelines for complex diseases into iron-clad rules, to deleterious effect.

Groopman and Hartzband cite several examples of regulations later proven questionable or even harmful, including the monitoring of ICU patients’ blood-sugar levels, the provision of statins to patients with kidney failure, and the monitoring of blood sugar in certain diabetics.

These and other recent examples show why rigid and punitive rules to broadly standardize care for all patients often break down. Human beings are not uniform in their biology. A disease with many effects on multiple organs, like diabetes, acts differently in different people. Medicine is an imperfect science, and its study is also imperfect. Information evolves and changes. Rather than rigidity, flexibility is appropriate in applying evidence from clinical trials. To that end, a good doctor exercises sound clinical judgment by consulting expert guidelines and assessing ongoing research, but then decides what is quality care for the individual patient. And what is best sometimes deviates from the norms.

Groopman and Hartzband cite studies showing that quality metrics “had no relationship to the actual complications or clinical outcomes” of hip and knee replacement patients at 260 hospitals in 38 states and that, in 5,000 patients at 91 hospitals, “the application of most federal quality process measures did not change mortality from heart failure.”

Sounds like it could be fodder for discussion at the “Medical effectiveness: Is there a NICE in U.S. future?” panel at Health Journalism 2009 on Saturday morning.