Tag Archives: quality metrics

Dallas reporters use AHRQ data to measure patient safety

The Dallas Morning News continues its 19-month investigation into patient safety at UT Southwestern Medical Center and Parkland Memorial Hospital.

The project, “First, Do No Harm: An investigation of patient safety in Dallas hospitals,” is behind the website’s paywall but The Dallas Morning News has granted AHCJ members access. To find out how to access the stories, please click here and log in as an AHCJ member.

Among the latest reporting:

Dallas Morning News reporters Ryan McNeill and Daniel Lathrop took advantage of AHRQ’s Patient Safety Indicator (PSI) software, typically used internally by hospitals, to process 9 million publicly available patient discharge records from Texas hospitals.

Parkland, the prominent local hospital that has drawn scrutiny on numerous prior occasions, was just the most notable of several area hospitals that came up short (and generated headlines). Our interest, though, lies more with the reporters’ investigative methodology and the path they’ve blazed for broader hospital quality reporting.

All their work was done in consultation with experts in the field, including academics, government officials and hospital administrators. An outside review indicated McNeill and Lathrop used the software properly, and their results were in line with a similar public analysis. But that’s not to say it was a simple process.

The newspaper spent six months analyzing nearly 9 million state hospital discharge records using Patient Safety Indicators, or PSI, software. This highly sophisticated system was designed for the federal government as a tool to measure potentially preventable complications among hospital patients.

The PSIs do not present a complete safety picture because they are based on administrative data — a summary of diagnoses, procedures and outcomes derived from patients’ medical charts, as opposed to a complete review of all medical records.

It’s not a perfect measure, but it’s one of the best available.

PSIs “reflect quality of care inside hospitals,” according to the Agency for Healthcare Research and Quality, a division of the U.S. Department of Health and Human Services. The agency released the PSI software in 2003 and periodically updates it, most recently in August. The News used that version for its final analysis.

The software analyzes the administrative data that nearly every hospital in Texas reports to the state. No patient-identifying information is included.

The results on 15 PSIs are statistically “risk-adjusted” because some hospitals treat a disproportionate share of unhealthy patients, who face a greater risk of potentially preventable complications. Rates from eight of the indicators are used to determine a hospital’s patient safety “composite score.”
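To make the risk-adjustment and composite-score ideas concrete, here is a deliberately simplified, hypothetical sketch. AHRQ’s actual PSI software uses regression models fitted on national reference data; the function names, weights, and numbers below are illustrative assumptions, not the real methodology.

```python
def risk_adjusted_rate(observed_events, expected_events, reference_rate):
    """Indirect standardization: scale a reference rate by the ratio of
    events observed at a hospital to events expected given its patient mix.
    A hospital treating sicker patients has a higher expected count, so the
    same observed count yields a lower adjusted rate."""
    return (observed_events / expected_events) * reference_rate

def composite_score(indicator_rates, weights):
    """Combine several indicator rates into one weighted score
    (weights are assumed to sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(rate * w for rate, w in zip(indicator_rates, weights))

# Hypothetical hospital: 12 adverse events observed, 15 expected given
# its patient mix, against a reference rate of 4 per 1,000 discharges.
adjusted = risk_adjusted_rate(12, 15.0, 0.004)

# Hypothetical composite from two adjusted indicator rates.
score = composite_score([adjusted, 0.002], [0.5, 0.5])
```

Because the hospital saw fewer events than its patient mix predicted (12 versus 15), its adjusted rate comes out below the reference rate, which is the intuition behind crediting hospitals that treat a disproportionate share of unhealthy patients.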

Some AHRQ PSI measures have just started appearing on Medicare’s Hospital Compare site, and the Texas health department plans to follow suit in 2013, but reporters looking to get their hands on a broader swath of the data will still have to follow the Dallas duo’s do-it-yourself approach.

The reporters’ work drew criticism from the Texas Hospital Association, which said the methodology was “not intended for use in public reporting.” McNeill rebuts its claims in a blog post. Daniel K. Podolsky, president of UT Southwestern Medical Center, also sent a letter criticizing the reporting. George Rodrigue, managing editor of The Dallas Morning News, published a point-by-point response to Podolsky’s letter.

Yale doc knows the right data is out there

Writing for Forbes, Matthew Herper profiles Harlan Krumholz, the pioneering quality-of-care researcher and Yale cardiologist. During his career, Krumholz, 52, has been at the leading edge of everything from Hospital Compare and angioplasty delivery times to recent headline grabbers like the Vioxx suit and the 2009 study on radiation exposure during routine scans. The highlight of the profile comes when Herper highlights Krumholz’s knack for picking out just the right metrics with which to hold feet to the fire.

Harlan M. Krumholz, M.D.

By figuring out what to measure and how, he showed that even top hospitals were systematically underperforming, largely because no one was tracking the results.

Krumholz’s basic idea is that if you ask the right question and pick the right measurement, you can figure out a way to get the answer, often using billing records or existing databases. This frequently involves partnering with insurers or Medicare. He has a knack for focusing on performance metrics that hold hospitals accountable.

Mass. won’t post hospitals’ death rates

The Boston Globe‘s Liz Kowalczyk reports that, two years after it was first proposed by a consumer group, the Massachusetts Health Care Quality and Cost Council has decided it won’t publish hospital-wide mortality rates. The problem, it seems, is the lack of an accurate, universal method of computing such numbers.

Health and Human Services Secretary Dr. JudyAnn Bigby, who heads the group that made the decision, said current methodology for calculating hospital-wide mortality rates is so flawed that officials do not believe it would be useful to hospitals and patients and could harm public trust in government.

It appears, Kowalczyk writes, that general hospital mortality rates just aren’t “ready for prime time” quite yet.

The council convened an expert panel, which worked with researchers to evaluate software from four companies for measuring hospital mortality. The problem was that researchers got vastly different results when they used the various methodologies to calculate hospital mortality in Massachusetts between 2004 and 2007, and they could not tell which company’s results, if any, were accurate.

Op-ed: Obama’s quality metrics could be dangerous

On the Wall Street Journal‘s Op-Ed page, Jerome Groopman and Pamela Hartzband cite the shortcomings of a quality metric-based system in Massachusetts and describe various misguided quality metrics. Groopman and Hartzband are both on the staff of Beth Israel Deaconess Medical Center in Boston and on the faculty of Harvard Medical School.

Initially, the quality improvement initiatives focused on patient safety and public-health measures. The hospital was seen as a large factory where systems needed to be standardized to prevent avoidable errors. A shocking degree of sloppiness existed with respect to hand washing, for example, and this largely has been remedied with implementation of standardized protocols. Similarly, the risk of infection when inserting an intravenous catheter has fallen sharply since doctors and nurses now abide by guidelines. Buoyed by these successes, governmental and private insurance regulators now have overreached. They’ve turned clinical guidelines for complex diseases into iron-clad rules, to deleterious effect.

Groopman and Hartzband cite several examples of regulations later proven questionable or even harmful, including the monitoring of ICU patients’ blood-sugar levels, the provision of statins to patients with kidney failure, and the monitoring of blood sugar in certain diabetics.

These and other recent examples show why rigid and punitive rules to broadly standardize care for all patients often break down. Human beings are not uniform in their biology. A disease with many effects on multiple organs, like diabetes, acts differently in different people. Medicine is an imperfect science, and its study is also imperfect. Information evolves and changes. Rather than rigidity, flexibility is appropriate in applying evidence from clinical trials. To that end, a good doctor exercises sound clinical judgment by consulting expert guidelines and assessing ongoing research, but then decides what is quality care for the individual patient. And what is best sometimes deviates from the norms.

Groopman and Hartzband cite studies showing that quality metrics “had no relationship to the actual complications or clinical outcomes” of hip and knee replacement patients at 260 hospitals in 38 states and that, in 5,000 patients at 91 hospitals, “the application of most federal quality process measures did not change mortality from heart failure.”

Sounds like it could be fodder for discussion at the “Medical effectiveness: Is there a NICE in U.S. future?” panel at Health Journalism 2009 on Saturday morning.