Two recent national surveys about AI point to some striking trends in health care. The first, from Wolters Kluwer, found that 57% of health care professionals have encountered or used unauthorized AI tools and apps, known as “shadow AI,” in their workplaces. The second, from the Coalition for Health AI (CHAI), found widespread concern among patients: 93% reported at least one concern about the use of AI in health care, and 51% said AI makes them trust health care less. However, more than 80% said that trust would increase if clear accountability measures were in place.
Journalists can use these sources as background information or as a spark for story ideas.
More about shadow AI and the first survey
“Shadow AI” refers to the use of any AI tools or applications by employees or end users without the formal approval or oversight of a health system or company’s information technology or security department. Examples of shadow AI include AI-powered chatbots, machine learning models used to find patterns in data, and data visualization tools that make charts. But because IT teams are unaware these tools are in use, employees can unknowingly expose the organization to data security and compliance risks, according to a blog post from IBM.
Wolters Kluwer Health sponsored an online survey of 518 full-time health care providers and administrators about the use of AI tools in their organizations. Some 40% reported they had encountered an unauthorized AI tool in their organization, and an additional 17% said they had used one themselves.
Most employees don’t use these programs with malicious intent; they simply want to work faster. When asked why they used unapproved AI tools, about half cited a faster workflow, and one in three cited a lack of approved tools, or approved tools that lacked the functionality they wanted. Some 26% of providers who reported using unsanctioned AI tools did so out of curiosity and a desire to experiment.
Still, use of unsanctioned tools can have costly impacts. The average security breach in health care cost more than $7.4 million in 2025, the report said. Patient safety was cited as a top concern by respondents about AI risks in health care.
Health care organizations can help mitigate the risks of shadow AI tools by establishing systemwide guidelines for AI tool use and communicating those policies to employees, according to the survey, which found a gap in how effectively such policies are communicated: 42% of administrators strongly agreed that policies were clearly communicated, compared with 30% of providers.
CHAI survey showcases health AI and transparency
People are using AI, but they don’t necessarily trust it, according to the CHAI report, which examined public trust in health AI. Researchers surveyed 1,456 patients of mixed ages, races and ethnicities, and income and education levels. The survey was funded by the California Health Care Foundation and conducted by NORC, an independent research organization at the University of Chicago.
Three-quarters of respondents reported using AI either intentionally or unintentionally, yet only 13% said they feel very comfortable with it. Notably, across all health care AI use cases, people strongly favored informed, or “opt-in,” consent. This preference was highest for uses such as mental health, insurance and diagnosis, and strongest among younger, lower-income and less-educated respondents, as well as among Black and Hispanic respondents (about 78%).
The majority of respondents (77%-85%) said providers should tell them when AI is used to write or summarize notes from medical visits, assist in making diagnoses, suggest possible treatment options and more. About 60% wanted providers to ask their permission before engaging in these uses.
Use of AI, even with disclosures, did not increase trust in health care, the survey found. In fact, in many cases, it reduced it. For example, 62% said they would trust their care a little less or much less if they were told that AI helped a health insurance company determine what care to cover. Respondents were a little more trusting of AI that helps match patients to relevant clinical trials or research studies, with 17% saying it would make them trust their care a little to much more.
The study also found that:
- Nearly two-thirds of respondents (64%) are open to their personal health data being used to improve health care, while 35% reject the idea outright.
- Over 70% said the patient alone should own and control their health data.
- Nearly two-thirds (63%) said they were somewhat or very concerned about their health data being sold or shared for profit. Concern was highest among adults ages 45-59, those with some college education, Black and Hispanic respondents, and middle-income households.
- Over half (55%) of respondents said they were somewhat or very concerned that AI systems may treat some groups unfairly.
- The top concerns about using AI in health care were that doctors and nurses could rely on it too much instead of using their own judgment, that there might not be enough human oversight of AI decisions, and that AI could cause miscommunication or errors in care.
The report breaks down responses by what matters most to people of different age groups, education and income levels, genders, and races/ethnicities.
“Many are open to AI as a tool but resist the idea that clinicians should be compelled to rely on it,” the authors wrote. “As policy emerges, we need to consider that mandates or expectations for providers to use AI may outpace public confidence.”
Resources
- Wolters Kluwer Survey Finds Broad Presence of Unsanctioned AI Tools in Hospitals and Health Systems – Wolters Kluwer news release
- Shadow AI tool use in health care organizations – link to download the whitepaper
- Shadow AI tools and chatbots have widespread use in hospitals – Healthcare Finance
- CHAI Releases New Patient Survey Report on Health AI & Transparency – CHAI news release
- CHAI Patient Survey Report