Thirteen journalists have been selected for the 2019 class of the AHCJ Fellowship on Comparative Effectiveness Research. The fellowship program was created with support from the Patient-Centered Outcomes Research Institute (PCORI) to help reporters and editors produce more accurate, in-depth stories on medical research and how treatment decisions are made.
The fellows will gather in Washington, D.C., the week of Sept. 9 for four days of presentations, how-to database sessions and discussions with researchers.
Eleven journalists have been chosen for the fourth class of the AHCJ Fellowship on Comparative Effectiveness Research. The fellowship program was created with support from the Patient-Centered Outcomes Research Institute to help reporters and editors produce more accurate, in-depth stories on medical research and how medical decisions are made.
The fellows will gather in Washington, D.C., the week of Oct. 7 for a series of presentations, roundtables, how-to database sessions and interactions with researchers.
In the Columbia Journalism Review, Katherine Bagley urges journalists to use caution when reporting the results of medical studies, citing reports on a recent study on the effectiveness of using stem cells to halt or even reverse multiple sclerosis as an example.
Done with caution and a critical eye, coverage of limited but promising research can provide a needed dose of optimism for people with MS and their families. Unfortunately, in this case, that journalistic prudence was almost totally missing.
Bagley said that over-the-top reporting and selective coverage of the small-scale, control-free study had inspired false hope and misled readers.
On the Wall Street Journal's Op-Ed page, Jerome Groopman and Pamela Hartzband cite the shortcomings of a quality metric-based system in Massachusetts and describe various misguided quality metrics. Groopman and Hartzband are both on the staff of Beth Israel Deaconess Medical Center in Boston and on the faculty of Harvard Medical School.
Initially, the quality improvement initiatives focused on patient safety and public-health measures. The hospital was seen as a large factory where systems needed to be standardized to prevent avoidable errors. A shocking degree of sloppiness existed with respect to hand washing, for example, and this largely has been remedied with implementation of standardized protocols. Similarly, the risk of infection when inserting an intravenous catheter has fallen sharply since doctors and nurses now abide by guidelines. Buoyed by these successes, governmental and private insurance regulators now have overreached. They’ve turned clinical guidelines for complex diseases into iron-clad rules, to deleterious effect.
Groopman and Hartzband cite several examples of regulations later proven questionable or even harmful, including the monitoring of ICU patients’ blood-sugar levels, the provision of statins to patients with kidney failure, and the monitoring of blood sugar in certain diabetics.
These and other recent examples show why rigid and punitive rules to broadly standardize care for all patients often break down. Human beings are not uniform in their biology. A disease with many effects on multiple organs, like diabetes, acts differently in different people. Medicine is an imperfect science, and its study is also imperfect. Information evolves and changes. Rather than rigidity, flexibility is appropriate in applying evidence from clinical trials. To that end, a good doctor exercises sound clinical judgment by consulting expert guidelines and assessing ongoing research, but then decides what is quality care for the individual patient. And what is best sometimes deviates from the norms.
Groopman and Hartzband cite studies showing that quality metrics "had no relationship to the actual complications or clinical outcomes" of hip and knee replacement patients at 260 hospitals in 38 states and that, in 5,000 patients in 91 hospitals, "the application of most federal quality process measures did not change mortality from heart failure."
Sounds like it could be fodder for discussion at the "Medical effectiveness: Is there a NICE in the U.S. future?" panel at Health Journalism 2009 on Saturday morning.
In the Los Angeles Times, Noam N. Levey reviews the controversy that broke out when President Obama included money in the stimulus plan to study what medical treatments are most “cost effective.”
As Levey writes, “Many healthcare authorities and policymakers have agreed for years that a better system for tracking how well drugs, medical devices and surgical procedures work could improve the care Americans receive and ultimately save billions of dollars.”
The fight over what Levey calls “a relatively obscure proposal” foreshadows arguments that health care reform advocates can expect to face as Obama moves forward with plans to overhaul the health care system.
“The comparative-effectiveness issue was supposed to help lay the groundwork for the broader reform effort. But it became a lightning rod for conservative commentators who labeled it a step toward socialized medicine, a line of attack that has doomed every health overhaul effort since World War II.”
Robert Pear reports in The New York Times that $1.1 billion of the $787 billion federal economic stimulus package will fund research into the relative effectiveness of drugs and other forms of medical treatment.
A council of federal employees will advise President Obama and Congress on the funding of studies that proponents hope will help bring down the soaring cost of health care. Advocates say the research will be used for reference purposes, and not to mandate certain treatments.
“The money will be immediately available to the Health and Human Services Department but can be spent over several years. Some money will be used for systematic reviews of published scientific studies, and some will be used for clinical trials making head-to-head comparisons of different treatments.”