Just as ChatGPT allows people to “chat” with its large language model or make requests of it, Stanford Health Care has developed its own artificial intelligence-backed software, called ChatEHR, that allows clinicians to draw information from a patient’s medical record.
The technology, now in a pilot stage according to an institutional news release, lets health care providers ask questions about a patient’s medical history, drawing its responses from information in the individual’s health record. Clinicians can request an automatic summary of the entire chart, retrieve specific data points relevant to a patient’s care or have the tool perform other tasks.
If it proves to be helpful (and accurate), it could save health care providers time spent searching extensive charts for test results or other pertinent background before visits or testing. A 2020 study of 155,000 physicians and 100 million patient encounters found that doctors spent an average of 16 minutes and 14 seconds per patient encounter using electronic health records, including chart review, documentation and ordering. Time spent on chart review varied across specialties but was the most time-consuming task overall, at 5 minutes and 22 seconds per patient, roughly a third of the total.
More about ChatEHR — and its potential
For now, ChatEHR is being used by just a small group of 33 physicians, nurses, physician assistants and nurse practitioners to monitor its performance, refine its accuracy and enhance its utility, the news release said.
When health providers access the tool, they are greeted with a prompt saying, “Hi, I’m ChatEHR! Here to help you securely chat with the patient’s medical record.” From there, they can enter any relevant questions about a patient, such as “Does this person have any allergies?” or “What does their latest cholesterol test show?”
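Stanford hasn’t published ChatEHR’s internals, but tools of this kind typically follow a retrieval-augmented pattern: fetch the relevant pieces of the chart, then have a large language model answer strictly from that text. Here is a minimal, hypothetical Python sketch of that pattern; the `fetch_chart_sections` helper, the model name and the sample data are all invented for illustration, and the OpenAI client stands in for whatever LLM backend a health system might actually run.

```python
# Illustrative sketch only -- not Stanford's actual ChatEHR code, which has
# not been published. It shows the general retrieval-augmented pattern such
# tools typically follow: pull relevant chart excerpts, then ask a large
# language model to answer strictly from that text. `fetch_chart_sections`
# is a hypothetical stand-in for an EHR query (e.g., against a FHIR API).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def fetch_chart_sections(patient_id: str, query: str) -> list[str]:
    """Hypothetical retrieval step: return chart excerpts relevant to the query."""
    # A real system would search notes, labs, medication lists, etc.
    return [
        "Allergies: penicillin (rash, documented 2019).",
        "Lipid panel 2024-11-02: LDL 96 mg/dL, HDL 51 mg/dL, total 168 mg/dL.",
    ]


def chat_with_record(patient_id: str, question: str) -> str:
    excerpts = fetch_chart_sections(patient_id, question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Answer only from the chart excerpts provided. "
                           "If the answer is not in them, say so.",
            },
            {
                "role": "user",
                "content": "Chart excerpts:\n"
                           + "\n".join(excerpts)
                           + f"\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content


print(chat_with_record("patient-123", "Does this person have any allergies?"))
```

Grounding answers in retrieved chart text, rather than letting the model answer from its general training, is the standard way such systems try to keep responses tied to the actual record.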
“Making the electronic medical record more user-friendly means physicians can spend less time scouring every nook and cranny of it for the information they need,” said Sneha Jain, M.D., a clinical assistant professor of medicine at Stanford who has used the tool, in a prepared statement. “ChatEHR can help them get that information up front so they can spend time on what matters — talking to patients and figuring out what’s going on.”
The tool also could accelerate other time-consuming tasks, such as helping ER doctors quickly get up to speed on the relevant pieces of a patient’s medical history, including medications, side effects and prior surgeries. “It’s a ton of work to go back and find all of that information during a time-sensitive case, so speeding up that process would be a big help,” said Jonathan Chen, M.D., Ph.D., a hospital physician at Stanford, in the news release. He also thinks ChatEHR could provide high-level summaries of the large packets of information, sometimes hundreds of pages, that arrive with patients transferred to the hospital. That would allow a patient’s treatment to begin while the team reviews the records in more depth later.
The team behind ChatEHR is also building out what they call “automations,” evaluative tasks based on a patient’s history. One example is an automation that can determine whether it’s appropriate to transfer a patient to another unit. Others may help determine a person’s eligibility for hospice care or recommend additional attention after surgery, according to the news release.
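The release doesn’t describe how these automations are built, but the general shape of an evaluative task over a patient’s history is easy to sketch: structured chart data goes in, and a flagged recommendation with reasons comes out. The Python below is purely illustrative, with invented thresholds rather than real clinical transfer criteria.

```python
# Illustrative sketch only -- not ChatEHR's automations, which Stanford has
# not detailed publicly. It shows the general shape of an evaluative task
# over a patient's history: structured chart data in, a flagged
# recommendation with reasons out. The thresholds are invented placeholders.
from dataclasses import dataclass


@dataclass
class PatientSnapshot:
    ward: str
    supplemental_oxygen_lpm: float  # liters per minute
    systolic_bp: int                # mm Hg
    nurse_checks_per_shift: int


def suggest_stepdown_transfer(p: PatientSnapshot) -> tuple[bool, list[str]]:
    """Return (appropriate?, reasons against) for a move to a lower-acuity unit."""
    reasons = []
    if p.supplemental_oxygen_lpm > 4:
        reasons.append("oxygen requirement above step-down threshold")
    if p.systolic_bp < 90:
        reasons.append("blood pressure not yet stable")
    if p.nurse_checks_per_shift > 4:
        reasons.append("monitoring frequency exceeds step-down staffing")
    return (not reasons, reasons)


ok, objections = suggest_stepdown_transfer(
    PatientSnapshot(ward="ICU", supplemental_oxygen_lpm=2.0,
                    systolic_bp=112, nurse_checks_per_shift=3)
)
print("Transfer appropriate" if ok else f"Hold transfer: {objections}")
```

In practice such automations may also lean on a language model to read free-text notes, which is where accuracy questions become most pressing.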
The software has been in development since 2023.
Other systems developing large language models
Other academic health systems are also developing large language models. For example, researchers at the University of Florida designed GatorTron, an AI natural language processing model intended to accelerate research and medical decision-making. A 2022 study by the investigators described the model, which was trained on more than 90 billion words of text from University of Florida Health clinical notes, PubMed articles and Wikipedia, and demonstrated how well it performed in answering medical questions. Another study, from 2023, found that patient notes written by doctors and those generated using GatorTron were nearly identical.
Bioinformatics experts at the University of Texas Health Science Center at Houston have also published research on quEHRy, their natural language question-answering system for electronic health records. In their in-house testing, the system’s precision in providing an exact answer to a question exceeded 90%.
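For context on what that figure means: exact-answer precision is typically the share of answered questions for which the system returned exactly the right value. A toy calculation, with made-up question-answer pairs rather than data from the study, shows the arithmetic:

```python
# Toy arithmetic only -- these question-answer pairs are made up, not data
# from the quEHRy study. Exact-answer precision here means: of the questions
# the system answered, what share got exactly the right value?
answered = [
    # (question, system answer, correct answer)
    ("What is the latest HbA1c?", "7.2%", "7.2%"),
    ("What is the current weight?", "81 kg", "81 kg"),
    ("When was the last tetanus booster?", "2018", "2017"),  # one miss
]
exact = sum(1 for _, predicted, gold in answered if predicted == gold)
print(f"Exact-answer precision: {exact / len(answered):.0%}")  # 67% here
```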
Elsewhere, experts at Children’s Hospital of Philadelphia developed an AI-powered virtual assistant named CHIPPER to support clinicians with various tasks in its electronic health record. White Plains Hospital in New York, part of the Montefiore Health System, is collaborating with Layer Health, an AI-enabled chart review company, to accelerate the chart review process, MobiHealthNews reported.
As these systems emerge, journalists can find interesting stories delving into how they work, how accurate they are, how they’re being trained and tested, how much time they save and what clinician users think of them. One well-known flaw of ChatGPT and similar models is that they can “hallucinate,” sometimes providing false or made-up material. It will be interesting to see how developers of these programs address that issue to ensure patient safety.
Resources
- Clinicians can ‘chat’ with medical records through new AI software, ChatEHR – Stanford news release
- New ‘ChatEHR’ tool enables clinical conversation at Stanford – Healthcare IT News
- Stanford pilots ChatEHR – Becker’s Health IT
- A large language model for electronic health records – npj Digital Medicine study from University of Florida researchers
- Medical AI tool from UF, NVIDIA gets human thumbs-up in first study – University of Florida news release
- quEHRy: a question answering system to query electronic health records – JAMIA study from University of Texas Health Science Center researchers