What journalists should know about hospital ratings

Liz Seegert


Photo: Joel Dinda via Flickr

Journalists should take hospital ratings with a healthy dose of skepticism, according to experts at a recent AHCJ New York chapter event. Simply looking at an institution’s overall rating is just the start. Reporting that without understanding what’s being rated and how “success” is measured does a disservice to your audience.

Ratings are far from perfect and ever evolving. That leaves journalists in something of a quandary, noted chapter president Trudy Lieberman. “What do we do about the ratings, how do we judge them, how do we use them in our stories and which ones should we use?”

Moderator Charles Ornstein, a senior reporter at ProPublica, said he views hospital ratings much like new chocolate studies. There’s never a long stretch between them, and journalists never really know what to do with them. “You don’t want to confuse people, but you feel like you need to report on something.”

Hospital rating sites — such as Hospital Compare, U.S. News, Consumer Reports and the Leapfrog Group — get the kind of traffic that journalists can only dream of for their own stories. (Reporter Steve Findlay breaks down the numbers in this recent Health Affairs article.) The numbers are big and people are looking at them. But do they really understand what they’re looking at? And do these ratings really mean anything?

Yes, they matter, said ProPublica reporter Marshall Allen, one of the creators of the Surgeon Scorecard. He said that the medical community has failed miserably to provide the kind of transparency about quality and safety that consumers have a right to know. “It’s their life; they’re making life decisions based on this information.”

Allen discussed his work on patient safety and the complexity involved in developing appropriate rating systems. Ratings spur the medical community to address and improve whatever’s being rated, “but it’s never as black and white as anyone would wish, and also, no one rating captures everything about the quality of care,” he said.

Journalists should use caution and ask questions about data sources and methodology, Allen advised. Data isn’t kept in standard ways, so its quality may depend on the state you’re in. Additionally, the accuracy and intensity of coding, such as billing codes, vary from institution to institution.

Elements that go into developing hospital ratings are diverse. They include how data is gathered and analyzed, how it’s weighted, how it’s presented, risk adjustment, whether it is process, structural, outcomes or clinical data (or some combination), and whether subjective information, like patient satisfaction surveys, is incorporated. Allen pointed out that self-reported data can be misconstrued. And, what’s not reported can’t be rated, so a hospital that does not rigorously look for infections may fare much better than one that diligently and honestly reports this information to the CDC.

While there’s no such thing as “perfect” data, it’s much better than when he first began analyzing performance information 30 years ago, said Robert Panzer, M.D., chief quality officer of University of Rochester Medical Center and Strong Memorial Hospital. Panzer spoke in his role as a steering committee member for the Healthcare Association of New York State (HANYS).

He is not a fan of letter grades or “best” type ratings. “We get much more value internally out of measurement systems that are more specific.” In New York, for example, cardiac surgery gets tracked, as do infection complications.

“Those comparative numbers help push us to get better. We’re not so much focused on whether we’re good or bad, but how much of an opportunity do we see, how good are the best, how far are we from there and how much effort do we need to put into it to focus on what matters,” he explained.

Measurement overload

Many hospitals are suffering from what HANYS has called “Measure Madness.” Some hospitals track 800 or 900 metrics to satisfy internal and external monitoring. That may have its benefits, but Panzer said it also distracts those on the front lines from knowing what to work on. And, it means less time to work on issues that need ongoing attention, like readmissions or surgical complications.

These myriad measures are vital to transparency, according to Leah Binder, president and CEO of the Leapfrog Group. The organization collects data and reports on hospital performance with a focus on consumers and purchasers. It issues a hospital safety score, aggregated from its own performance surveys and data from the CDC, the Agency for Healthcare Research and Quality (AHRQ), the Centers for Medicare and Medicaid Services (CMS) and the American Hospital Association.

“Transparency is critical to getting anything done,” she said. “But health care has a tradition of zero transparency. Zero.”

Making ratings public means that things can change, said Binder. Many hospitals claim they are transparent, but that’s not always true. New York is one of the least transparent states when it comes to hospital data. And nationally, she said, progress has been “awful,” despite a call nearly 15 years ago by the Institute of Medicine in its report To Err Is Human.

So journalists need to keep reporting on hospital performance and keep holding hospitals and other institutions accountable. She encouraged the audience to remember the value of what they’re doing and the contribution they make to that transparency. That, she said, is a fundamental shift from the way health care used to be conducted in the United States.


Liz Seegert

Liz Seegert is AHCJ’s health beat leader for aging. She’s an award-winning, independent health journalist based in New York’s Hudson Valley who writes about caregiving, dementia, access to care, nursing homes and policy.