In this era of “alternative facts,” everyone should read Sue Halpern’s piece, “They Have, Right Now, Another You,” published in the New York Review of Books in late December.
The piece, along with several recent studies on the accuracy of electronic health records (EHRs), raises a growing question: What types of data can we trust? And, more important, how can we tell the difference between good and bad data?
In her piece, Halpern describes what she learned about the data Facebook holds on her. It turns out that Facebook does not know much about her, despite collecting 98 data points on each user.
Meanwhile, data keeps piling up. Halpern points out that people give up their data freely, “in drips and drops,” to Facebook, Google, Amazon, and, increasingly, in the health sphere, to wearable companies via fitness trackers and to ancestry websites through DNA collection kits.
How data is collected, stored, and used all helps determine its value. Just as Facebook, it turned out, didn’t know very much about Halpern, two recent studies suggest that patient data can also be misleading or plainly incorrect.
In one study published in JAMA Ophthalmology in late January, eye symptoms including blurred vision, glare, pain and discomfort were less likely to be documented in the patient EHR than in a patient questionnaire.
The study authors concluded that there was no single reason for the inconsistencies in symptom reporting (and the study was limited to 162 patients at one academic medical center). It’s possible that the questionnaire provided recall prompts for symptoms that the office visit (and EHR data entry) did not. Incorporating patient questionnaires into the EHR through standard intake procedures could help improve accuracy, the authors suggested.
In an accompanying editorial, Dr. Christina Weng of the Baylor College of Medicine in Houston asked an important question: “While this finding is worrisome in itself, it introduces a more macroscopic concern about EMR data integrity in general: can we trust what is written in our patients’ medical records?”
However, another new study suggests that we can trust most (but not all) of what is written in medical records. University of North Carolina researchers compared patient reports with medical record abstracts of comorbidities in newly diagnosed prostate cancer patients. The two sources agreed for more than 90 percent of the 881 patients included, according to the study, published in JAMA Oncology on Feb. 17.
Some major health conditions, such as arthritis, coronary artery disease and arrhythmia, had lower agreement between the patient report and medical record, according to the study.
When reporting on accuracy in patient data, reporters should keep in mind the following questions:
- What types of patient data are integrated into the EHR?
- Are there any data types (patient questionnaires, etc.) that are stored separately from the EHR that might contribute to the overall picture of the patient’s health?
- Has the hospital or health system conducted any internal studies on the integrity and/or accuracy of the data they are collecting on patients?
- How is this data being used in comparative effectiveness research, if at all?
The more we know about data collection methods and internal tests of data accuracy, the better we can evaluate the value of that data.