More hospitals reporting quality data from EHRs but these data are not available by facility

Rebecca Vesely

About Rebecca Vesely

Rebecca Vesely is AHCJ's topic leader on health information technology and a freelance writer. She has written about health, science and medicine for AFP, the Bay Area News Group, Modern Healthcare, Wired, Scientific American online and many other news outlets.

Photo: PINKÉ via Flickr

The number of hospitals voluntarily submitting data on quality generated by electronic health records (EHRs) increased significantly over the past year, according to the Joint Commission, a leading health care facility accreditation organization.

However, these data are not publicly available by facility, according to the Joint Commission. This is unfortunate because the information offers another window into hospital quality. AHCJ has long advocated for the public release of the Joint Commission’s hospital accreditation surveys and complaint reports.

These data generated by EHRs are called electronic clinical quality measures (eCQMs). They are electronically gathered and transmitted to the federal government and to accreditation organizations and can help shed light on in-patient care quality at hospitals.

A total of 470 hospitals nationwide submitted eCQM data to the Joint Commission in 2016, a huge jump from the 34 hospitals that submitted these data in 2015, according to the Joint Commission’s annual report released in November. Submission of eCQM data was voluntary.

An estimated 2,000 hospitals are expected to submit eCQM data to the Joint Commission in 2017, according to the Chicago-based organization.

Hospitals also submit eCQM data to the Centers for Medicare and Medicaid Services (CMS). The agency has a resource page on the topic.

For Joint Commission eCQM reporting in 2016, hospitals could select and report performance on up to 23 eCQMs in eight measure sets. These included heart attack, stroke, perinatal and venous thromboembolism (VTE) care. The most frequently reported eCQM data were for emergency department care. Many of the eCQMs aligned with CMS performance measures.

The Joint Commission noted that performance rates on eCQMs tended to be lower than clinical measures calculated through chart review. A few reasons for this were cited: different measure specifications between the two reporting systems; data sources for eCQMs might be more limited because they are based on structured information coded into the EHR; and updates for eCQMs might not be aligned with reporting based on chart abstractions.

However, based on surveys of hospitals, the Joint Commission said there is increasing confidence among facilities in reporting accurate eCQM data.

One hospital responded to the survey that using electronic methods to measure quality was preferable because it frees up clinicians and quality managers to focus on improvement rather than data collection and validation, according to the Joint Commission. For more on eCQMs, go to the Joint Commission’s annual report and scroll down to the lower right section “eCQM Data Summary.”

With the caveat that extracting quality data from EHRs remains imperfect, greater reporting by hospitals of eCQM data is a good thing for journalists. We can always use more data sources to gauge hospital care quality. But making them available by facility and/or health system would increase their value to news organizations – and to the public.


1 thought on “More hospitals reporting quality data from EHRs but these data are not available by facility”

  1. Norman Bauman

    We’ve had meetings about this at the NYC chapter and national meetings, and we’ve had discussions on the AHCJ list.

    If you think of patients as consumers who should make informed decisions about purchasing health care in the free market, then you have to give them a way to evaluate the quality of different providers, the way Consumer Reports rates cars.

    Unfortunately, different evaluations give inconsistent results, and some of the evaluations give absurd results. This is a sign that we can’t use this data to reliably rate providers.

    There are at least 2 problems with this raw data.

    (1) The important outcome is the clinical outcome: how many people survive after 30 and 90 days, etc. But the most important factor in clinical outcomes is the patient population. Patients with worse heart, kidney and pulmonary function, patients who are older, and patients with lower income, will do worse in almost every kind of surgery. So the main task in these ratings is to correct for these factors. But that’s impossible. All we have are associational studies, and the original databases may not have the data you need anyway. That’s why we have randomized controlled trials.

    (2) Look at the empirical facts. The big paradox is that in many comparisons of hospitals, the better academic hospitals have worse scores. That’s because they monitor performance measures better, so they’re more likely to record that a patient’s anticoagulant control didn’t meet guidelines. And because they get the most difficult patients.

    For example, JAMA and JAMA Internal Medicine recently had articles about the Veterans Health Administration system, which finally included its data in the Hospital Compare system (doi:10.1001/jama.2017.17667; doi:10.1001/jamainternmed.2017.0605). The VA hospitals did better on hard clinical outcomes, like mortality from myocardial infarction. But they did worse on “behavioral health” and “patient experience.” Some of these low ratings were meaningless. For example, one of the scores was whether they counseled patients on tobacco use at every encounter. The VA has a computer system that indicates whether tobacco users have already been counseled, and it did studies to evaluate effective treatments. It’s quick, cheap and easy for a doctor to have a “conversation” about tobacco at every patient encounter and check the box, but unless you follow it up the way the VA does, it doesn’t affect smoking.

    Before I recommend a rating system to my readers, I would insist that the rating organization demonstrate with high-quality evidence that patients who use that rating system make better decisions and have better clinically significant outcomes. In other words, I want to see evidence that they’re valid for the purpose for which they’re intended. I haven’t seen any evidence like that. Has anyone?
