Dr. Oz touts ‘AI avatars’ to address rural mental health care shortage. Experts urge caution

Karen Blum



CMS Administrator Dr. Mehmet Oz in July 2025. Public domain photo

Centers for Medicare and Medicaid Services (CMS) Administrator Mehmet Oz, M.D., MBA, recently suggested that artificial intelligence (AI) could offer a way to help rural communities access mental health care amid a shortage of providers.

“There’s no question about it — whether you want it or not — the best way to help some of these communities is gonna be AI-based avatars,” Oz said, according to Georgia Public Broadcasting and other media outlets.

The reality is that this field is still emerging, and while some AI platforms and programs may play a role in supporting mental health care, they are not equipped to deliver that care independently of a licensed mental health clinician.

“There’s not good evidence at this point that AI can deliver effective mental health care,” said John Torous, M.D., director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston, in an interview with AHCJ. He highlighted the need for rigorous, scalable studies. While research on these programs is developing rapidly, none of the large technology companies, including Google and OpenAI, claim their products can deliver mental health care, Torous said — instead, they use language indicating that products can offer wellness or emotional support. “It’s telling if none of them claim to deliver psychiatric care with AI.”

It’s not the first time Oz has pushed AI as a solution. Last April, during his first town hall with CMS staff, he claimed that if a patient went to a doctor for a diabetes diagnosis, it would cost $100 per hour, while an appointment with an AI avatar would cost about $2 per hour, according to Wired. He also said that patients rated the care they received from an AI avatar as equal to or better than a human doctor, the magazine said. It wasn’t clear from the reporting where those statistics originated. 

At the February meeting, Oz proposed using AI platforms to conduct early mental health intakes, claiming they can pick up on subtle nuances in how people communicate, according to Mobihealth News.

Some examples of AI avatars used in therapy

Avatars are essentially a way to put a human-like face on an AI technology while a person interacts with it, Torous said: “What really matters is the content being delivered.”

Avatars and AI programs may be helpful in some cases as one component of mental health care — under the guidance of clinicians — but they’re still being studied. In 2024, I wrote about research in the United Kingdom in which people with schizophrenia interacted with avatars representing the voices they hear, helping reduce their distress.

At AHCJ’s Health Journalism conference in Los Angeles last year, one of the field trips introduced us to Xaia, an AI-powered avatar designed to supplement mental health therapy at Cedars-Sinai Medical Center by offering emotional support and conversation. To interact with Xaia, users wear a virtual reality headset and see calming images like a beach or meadow while practicing deep breathing exercises and meditation, according to a hospital news release.

Studies highlight AI’s limitations in mental health care

A range of studies suggest that the technology can fall short when it comes to triaging health conditions and engaging in therapy-type conversations. “It would be nice to give everyone access to a very smart computer model that would be easy to interact with and accurate, but at this point, it’s tricky,” Torous said. 

Many studies have lacked an active control group to compare against the AI intervention group, he said. For example, one small study in NEJM AI last year from Dartmouth researchers found that users of a generative AI chatbot experienced reductions in symptoms of major depressive disorder, anxiety and eating disorders over a four-week period, but the comparison group was on a waitlist rather than receiving another type of therapy.

Here are some other examples of studies making headlines lately:

  • One study in Nature Medicine found that, in a test involving 60 medical cases, the large language model ChatGPT Health under-triaged 52% of potential cases and got questions about suicide “wrong 100% of the time,” Torous said.
  • A study from Brown University in Rhode Island that analyzed counseling-like conversations with chatbots found the programs failed to meet essential ethics standards and sometimes reinforced harmful beliefs, displayed cultural prejudices and mishandled suicidal thoughts, according to HealthDay. Authors wrote that these AI platforms “are being used in ways that can lead users to overestimate the types of roles that these systems can assume.”
  • A study from Aarhus University in Denmark found dozens of cases where chatbot therapy worsened mental illness, increasing mania and suicidal thoughts, aggravating eating disorders and contributing to worse delusions, HealthDay reported.

Several cases of suicide have been linked to chatbot use, and some families of those impacted have filed lawsuits against OpenAI and Character.AI. 

Many remain cautious 

Carrie Henning-Smith, Ph.D., MPH, associate professor at the University of Minnesota and co-director of its Rural Health Research Center, told NPR that using AI avatars would strip human connection away from health care. She also raised concerns about testing unproven technology on already-underserved populations and about unreliable broadband, warning that this approach could further erode already-fragile trust in the medical system.

Meanwhile, states have been cracking down on the use of AI for behavioral health care when it occurs without involvement of qualified experts, as I wrote about for AHCJ last summer. Illinois was the first to enact legislation banning the use of AI for mental health or therapeutic decision-making without oversight by licensed clinicians. More than 250 bills targeting AI in health care were proposed in state legislatures across the country. These efforts are continuing. For example, a bill introduced in Colorado last month would require AI chatbots to adhere to a series of regulations aimed at protecting children and preventing suicide.  

Story ideas 

There are a number of story angles in this ongoing development, as I wrote in this AHCJ tip sheet. Journalists also can track new studies as they emerge, ask critical questions, and examine how AI will be incorporated into states’ plans for rural health transformation under new federal grants from CMS, which were awarded to all 50 states.

They also could write about more comprehensive behavioral health programs for rural communities that incorporate aspects of AI. In a collaborative care program developed at East Carolina University, children are screened for anxiety and depression during primary care visits at five or six clinics in rural counties in North Carolina. 

Once children are diagnosed, technology is used to enhance access to care, through telepsychiatry appointments with a child psychiatrist and through a virtual reality “community house” on the Roblox gaming platform where children can play while learning and getting peer support from others, Healthcare IT News reported.

The program’s AI system collects data points such as patient engagement while the kids play. The platform also has educational videos about behavioral health conditions that family caregivers can watch. Findings from the program’s first 34,000 screenings and the 1,000 resulting diagnoses of youth anxiety and depression will be presented on March 11 at the 2026 HIMSS Global Health Conference & Exhibition.


Karen Blum

Karen Blum is AHCJ’s health beat leader for AI and Patient Safety. She’s a health and science journalist based in the Baltimore area and has written health IT stories for numerous trade publications.