Tips to keep in mind when reporting ‘best of’ hospital ratings
By Liz Seegert
Hospital ratings, the “best of,” “top ten” and other rankings designed to help consumers make decisions, are not necessarily all they’re cracked up to be. So much more goes into these rankings than just the letter or number grade. Savvy reporters should pause and consider many angles before jumping in to proclaim that their local hospital is “best,” “worst” or somewhere in between.
Ratings certainly help improve transparency and support the patient’s right to know. However, it’s important that journalists know how to read between the lines and question the methodology and potential biases.
Here’s a tip sheet based on ideas presented at an event last month sponsored by AHCJ’s New York chapter. A panel moderated by ProPublica senior reporter Charles Ornstein featured Robert Panzer, M.D., chief quality officer at the University of Rochester Medical Center and a steering committee member for the Healthcare Association of New York State; Leah Binder, chief executive of the Leapfrog Group; and Marshall Allen, a reporter for ProPublica.
Comparisons can be complicated, panelists noted, so it helps to go back to reporting basics. Ask yourself:
Who is gathering the data and conducting the analysis?
- Is it an independent non-profit, government entity, trade association, or consumer or trade publication? Is the data provided by hospital administrators, patients, physicians, or from computer-generated claims?
What data is being gathered? What measures are being used?
- Infection rates? Billing codes? Readmission rates?
- Is it mandated by national or state agencies?
- Is it self-reported?
- Are these process measures, clinical measures or outcome measures?
- Remember that hospitals and providers may code things differently.
- Are hospitals diligent about gathering all the data (such as infection rates)? Treat self-reported data with skepticism.
What is being compared?
- The “best hospital” … for what? Be aware that facilities may excel in one area but have lousy ratings in another. Try to compare apples with apples among the scorecards.
When was the data generated?
- Is it current? Is it a new metric mandated by the Centers for Medicare & Medicaid Services (CMS) or another entity?
- How does the data compare with that of previous years?
Where was the data gathered?
- Is this hospital-wide data or from a particular department?
- Are the facilities being ranked all academic medical centers, all community hospitals, or some combination?
Why was the data gathered?
- Is this mandatory reporting for a state or government agency, or a survey for a consumer magazine?
- Is it an effort to address past or current issues (for example, Joint Commission violations) or to track certain procedures or error rates?
- What’s the motive for gathering the data?
How was the data gathered and how will it be analyzed?
- Did clinicians have to enter data into a computer immediately following surgery or a patient encounter?
- Was this based on electronic medical records (EMRs)? Was it gathered pre- or post-discharge?
- Did it rely on subjective surveys?
- What criteria were included, and how will the results be reported (line-item lists, composite scores or another method)?
“It’s never as black and white as anyone would wish,” Marshall Allen reminded attendees. “Also, no one rating captures everything about the quality of care.”
Some ratings look only at specific surgical procedures at teaching hospitals, for example. Some only look at patient satisfaction scores, which often are subjective and may hinge on something as simple as what time dinner is served.
Allen suggested that, regardless of who publishes hospital ratings, reporters should:
- Look at measures and methodology. How do they determine the quality of care provided by the hospital?
- Consider the organization’s marketing. Does it present its own findings with integrity?
- Do the ratings sponsors make pronouncements about the absolute nature of their measures that aren’t really backed by the data itself?
- Ask whether the measures are meaningful. Can the average consumer understand them? Often, the information released is difficult to grasp or to put into context.
- Consider the purpose. Some publications’ business models include charging participating facilities tens of thousands of dollars annually to use the ratings in their marketing.
Why are all these ratings so different? If you line up various ratings side by side, Allen noted, none of them seem to agree. That’s because different organizations do things differently, drawing from different data sources and weighing metrics differently.
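To see why this happens, here is a minimal sketch in Python using entirely invented hospitals, measures and weights (not any real rater’s methodology) of how two organizations weighting the same three measures differently can arrive at different rankings from identical data:

```python
# Hypothetical illustration: the same underlying measures ranked under two
# different weighting schemes. Every name and number here is invented.

# Each hospital's made-up scores on three common measure types,
# normalized to a 0-100 scale where higher is better.
hospitals = {
    "Hospital A": {"infections": 92, "readmissions": 70, "satisfaction": 60},
    "Hospital B": {"infections": 75, "readmissions": 88, "satisfaction": 80},
    "Hospital C": {"infections": 80, "readmissions": 78, "satisfaction": 95},
}

# Two invented raters that weight the same three measures differently.
schemes = {
    "Safety-focused rater": {"infections": 0.6, "readmissions": 0.3, "satisfaction": 0.1},
    "Experience-focused rater": {"infections": 0.2, "readmissions": 0.2, "satisfaction": 0.6},
}

def composite(scores, weights):
    """Weighted average of one hospital's measure scores."""
    return sum(scores[measure] * weight for measure, weight in weights.items())

for rater, weights in schemes.items():
    ranking = sorted(hospitals, key=lambda h: composite(hospitals[h], weights), reverse=True)
    print(f"{rater}: {' > '.join(ranking)}")

# Output:
#   Safety-focused rater: Hospital A > Hospital C > Hospital B
#   Experience-focused rater: Hospital C > Hospital B > Hospital A
```

Same data, two different “best” hospitals: the ranking reflects the rater’s weights as much as the underlying care, which is why a rating’s methodology deserves as much scrutiny as its scores.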
It’s tempting to apply a single measure to an entire facility, but even surgeons performing the same procedure in the same hospital can have quite different outcomes and readmission rates. So use discretion when reporting, and treat the signals in the data primarily as a guide to a well-sourced story. The data usually isn’t the whole story.
Importantly, demand transparency from hospitals. Push them to share their own data and encourage them to be clear with the public.
Here are some ratings sites and other resources:
- CMS: Hospital Compare datasets.
- AHCJ’s Hospital Compare spreadsheets.
- The Leapfrog Group: Hospital Comparisons.
- Healthcare Association of New York State: Measures That Matter.
- Consumer Reports: Doctors and Hospital Ratings.
- U.S. News & World Report: Best Hospitals.
- ProPublica: Surgeon Scorecard.
- AHCJ’s HospitalInspections.org