The misuse of artificial intelligence chatbots such as ChatGPT, Gemini and Copilot in health care is the most significant health technology hazard for 2026, according to the nonprofit patient safety organization ECRI. Every year, the organization compiles a list of the top 10 hazards based on responses to member surveys, literature reviews, medical device testing in its lab and investigations of patient safety incidents.
Journalists can download an executive brief of the report for more information and to spur story ideas. The report identifies what ECRI considers the greatest potential dangers and offers recommendations to reduce the risks of patient harm.
Concern over AI chatbots
The chatbots referenced earlier, such as Gemini and Copilot, are not designed specifically for health care applications, ECRI experts cautioned during a January webcast. “They’re not medical devices. They’re not FDA-approved and regulated for that purpose,” said Rob Schluth, a principal project officer of device safety at ECRI. However, because those tools are becoming integrated into our lives, “we’re finding that many people in health care or with health concerns are turning to tools like these for advice about medical conditions, or treatments, or other health care-adjacent questions, and that poses some risks.”
Besides looking up information on health conditions, clinicians may use the chatbots to identify potential treatment options for a patient or create notes. Hospital staff may use them to make purchasing decisions or for help writing reports, experts said.
It’s not that the chatbots themselves “have suddenly turned dangerous,” said ECRI’s president and CEO, Marcus Schabacker, M.D., Ph.D., but that when a chatbot’s output “feels helpful and definitive, people start to rely on it without necessarily questioning it.”
Large language models (LLMs) like these are designed to respond in ways that keep users engaged, ECRI staff said during the webcast, not to challenge or correct flawed assumptions embedded in a user’s query. The chatbots can also make mistakes or fabricate (“hallucinate”) information, because they are biased toward telling users what they predict they want to hear. They are also designed to sound definitive rather than to say, “I’m not sure” or “I’m sorry, I can’t help you with this,” Schabacker said.
A big misconception is that LLMs understand what they’re saying, said Christie Bergerson, Ph.D., a device safety analyst with ECRI. Instead, they predict the next word based on patterns in the data they were trained on, she said: they identify words that typically occur together in conversations about a given topic and string them into sentences. Their responses rest on predictions and statistical probabilities, not comprehension.
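To make Bergerson’s description concrete, here is a toy sketch in Python (our illustration, with invented training text, and vastly simpler than any production LLM). It counts which words follow which in a tiny corpus and then “generates” a sentence purely from those counts; nothing in it understands medicine.

```python
# Toy next-word predictor: picks each word purely from statistical
# patterns in its "training" text, with no grasp of meaning.
from collections import Counter, defaultdict

training_text = (
    "the patient was given aspirin . the patient was monitored . "
    "the nurse was given instructions ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely word to follow `word`."""
    candidates = following.get(word)
    if not candidates:
        return "."  # no data; a real chatbot would still answer confidently
    return candidates.most_common(1)[0][0]

# Generate text one prediction at a time, starting from "the".
word, sentence = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g., "the patient was given aspirin"
```

A production LLM does the same thing at enormous scale, predicting one token at a time from probabilities learned over billions of words, which is why a fluent, confident-sounding answer is not evidence of understanding.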
Chatbots can still be useful for brainstorming, gathering background information or explaining complex topics, the experts said, but users should verify the information and “check in with a human expert before taking actions or making decisions based off an LLM’s response,” Bergerson said.
AHCJ has covered concerns related to the use of AI chatbots and mental health through blog posts and a webinar last fall.
Other health technology hazards
Rounding out the top 10 list are these issues that can impact patient safety:
2. Unpreparedness for a “digital darkness” event, or sudden loss of access to electronic systems and patient information.
Cyberattacks, natural disasters, vendor outages and internal system failures could all paralyze a health care facility, the report said, delaying treatment or jeopardizing patient safety. Health systems should strengthen disaster recovery planning, including establishing downtime procedures, maintaining reliable data backup processes and ensuring readiness through training and safety drills, the authors wrote.
3. Substandard and falsified medical products
Counterfeit products are reaching U.S. markets “with alarming frequency,” the authors wrote, and those that do not function as intended can cause harm. They encouraged health care providers to strengthen their supply chains, demand high-quality products and implement measures to protect patients and staff from flawed products.
4. Recall communication failures for home diabetes management technologies
Continuous glucose monitors and other devices have improved quality of life for people with diabetes, but harm can result if product recalls and updates do not reach patients and caregivers in a timely manner. Home users of such technologies should be proactive in identifying and responding to safety notices about their devices and apps, the authors said, and providers and manufacturers should provide clear product safety information.
5. Misconnections of syringes or tubing to patient lines
Inappropriately connecting syringes or tubing to patient lines intended for other uses can lead to medications, solutions, IV nutrition or gas being introduced into the wrong line, with severe consequences. The report’s authors encouraged hospitals to adopt safety connector devices.
6. Underutilizing medication safety technologies in perioperative settings
Medication errors can occur at several points before, during and after surgical procedures, the authors noted, and the drugs administered are often opioids or other “high alert” medications. Health care organizations should incorporate tools like barcode medication administration systems, in which health care workers scan a patient’s wristband and the medication label to confirm they match; a simplified sketch of that check follows.
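The Python below is a hypothetical sketch (invented patient and drug identifiers, not any vendor’s software) of the core verification such a system performs: the scanned drug must correspond to an active order for the scanned patient.

```python
# Minimal barcode medication administration (BCMA) check: the drug
# scanned at the bedside must match an active order for the patient
# whose wristband was scanned. All identifiers here are invented.
from dataclasses import dataclass

@dataclass
class Order:
    patient_id: str  # ID encoded on the patient's wristband
    drug_code: str   # ID encoded on the medication's barcode label

# Hypothetical active orders pulled from the medication record.
active_orders = [Order("PT-1001", "FENTANYL-50MCG")]

def verify_scan(wristband_id: str, label_code: str) -> bool:
    """Return True only if the scanned drug is ordered for the scanned patient."""
    return any(
        o.patient_id == wristband_id and o.drug_code == label_code
        for o in active_orders
    )

print(verify_scan("PT-1001", "FENTANYL-50MCG"))  # True: administer
print(verify_scan("PT-2002", "FENTANYL-50MCG"))  # False: alert, do not administer
```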
7. Inadequate device cleaning instructions
Failure to properly clean and disinfect or sterilize reusable medical devices between uses can spread infection or lead to device damage or other harms, the authors said. Wide variation in the instructions manufacturers provide can make reprocessing challenging, so health care organizations should review a device’s reprocessing requirements before placing orders.
8. Cybersecurity risks from legacy medical devices
Older software-based devices and systems that no longer receive adequate cybersecurity updates give hackers an opening to exploit. Health systems might consider disconnecting such devices from their networks, using security tools to manage vulnerabilities or planning to replace the devices, the authors said.
9. Health technology implementations that prompt unsafe clinical workflows
Implementing health care technologies without users fully understanding how to use them can contribute to a range of patient harms, especially if users resort to unsafe workarounds. Health systems should conduct comprehensive workflow analyses before deploying new technology and institute thorough training programs, the authors said.
10. Poor water quality during instrument sterilization
Failure to maintain water quality during instrument disinfection/sterilization exposes patients to potentially infectious pathogens or can cause instruments to become corroded or spotted with residues. Health systems should routinely assess the cleanliness of processed devices and sample water quality.
Resources
- Top 10 Health Technology Hazards for 2026 Executive Brief – from ECRI
- The Misuse of AI Chatbots in Healthcare: Risks, Realities, and Responsible Innovation – an ECRI webcast
- 5 avenues to continue reporting on AI chatbots and mental health – an AHCJ blog post
- AI chatbots and mental health: How to report responsibly on a new risk – an AHCJ webinar