Can every hospital really be better than every other hospital at everything?
Hospital public relations folks and the people who produce rankings, such as Leapfrog and HealthGrades, would like us to think that’s the case.
But, as journalists, we need to take a critical look at the ever-increasing number of hospital rankings that land in our inboxes, said Marshall Allen and Olga Pierce, both of ProPublica.
The pair outlined tips we can use to decipher information during a workshop, “Making sense of hospital ratings: A guide for reporters,” at Health Journalism 2013 in Boston.
“New rankings come out all the time,” Pierce said. “The key is to develop a foundation to think critically about rankings.”
The two journalists talked about the “anatomy” of hospital rankings, including:
- Data collection method
- Indexing, weighting and aggregating of data (some data counts more toward the rating)
- Presentation: letter grades, blobs, comparison to the average
- Risk adjustment
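To make the weighting-and-aggregation step concrete, here is a minimal sketch of how a rater might combine measures into a composite score and bin it into a letter grade. The measure names, weights, and grade cutoffs are invented for illustration; they do not reflect any actual rating system's methodology.

```python
# Hypothetical example: weighted aggregation of quality measures.
# All names, scores, weights and cutoffs below are made up.

MEASURES = {
    # measure: (hospital's score on a 0-100 scale, weight in the composite)
    "infection_rate": (82.0, 0.40),          # outcome measures often weigh more
    "nurse_staffing": (75.0, 0.25),          # a structural measure
    "timely_heart_attack_care": (90.0, 0.20),  # a process measure
    "patient_satisfaction": (68.0, 0.15),
}

def composite_score(measures):
    """Weighted average of measure scores; weights are assumed to sum to 1."""
    return sum(score * weight for score, weight in measures.values())

def letter_grade(score):
    """Bin a composite score into a letter grade using arbitrary cutoffs."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

score = composite_score(MEASURES)
print(round(score, 1), letter_grade(score))
```

Note how sensitive the result is to the chosen weights: shifting weight between measures can move a hospital across a grade boundary, which is one reason to read each rater's methodology closely.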
They also explained types of measures, including:
- Process measures, such as the time to treat a heart attack patient.
- Structural measures, such as the number of nurses.
- Outcome measures, such as readmission and infection rates.
- Patient satisfaction measures, such as: Was your room clean? Would you recommend this hospital to others?
Pierce and Allen warned that some measures are what they kindly called “silly.” That could include a statistic like 0.23 incidents of an item left in a patient after surgery.
“Remember, there are real people behind the measures,” Pierce said. “If you’re the person who left the hospital with a sponge inside your body, that’s pretty important to you.”
Sometimes, data is used to obscure real issues, they said.
“Too much data makes it hard to know anything,” Allen said.
It’s important to know where the data come from. Sources can include:
- Self-reported data – often not verified and influenced by various incentives. Some hospitals may refuse to provide it.
- Billing/claims data – often not verified and may be inaccurate, but more complete and uniform.
- Clinical data – not uniform, expensive to standardize and subject to privacy limitations.
- Doctor or patient surveys – respondents may not be well informed, and answers can be influenced by incentives and factors unrelated to quality of care, like physician referrals or friendly staff.
“Read the methodology and understand how the rating system is put together,” Allen said. “Then make sure your story does not oversimplify the limits of the methodology.”