News stories offer cautionary tales for journalists covering AI and tech

AI chatbot. Photo by Mohamed Hassan via Pixabay

Health care organizations have largely embraced artificial intelligence programs and tools to help in areas such as searching records and medical documentation. Although the computing technology is powerful and the systems are designed to learn as they go, recent news stories from National Public Radio, STAT and the Wall Street Journal show that these systems are imperfect and that human input remains necessary and valuable.

NPR and STAT reported on the National Eating Disorders Association’s decision to shut down its volunteer-staffed national helpline and instead use a chatbot (an artificial intelligence program that simulates conversation) named Tessa. (See links to the stories below in resources.)

The organization’s leadership seemed to have had good intentions, reasoning that Tessa could respond faster and help more people. Unfortunately, within a week the chatbot was recommending diet and weight loss tips, which can be triggering for people with eating disorders and can perpetuate their conditions.

After users shared concerns on social media, the association announced it was indefinitely disabling Tessa. At the time of publication, the organization’s website still listed information for an independent crisis text line staffed by trained volunteers.

One anonymous volunteer told STAT that in ending the helpline, NEDA missed an opportunity to use AI to automate database searches or surface up-to-date provider information, which would have streamlined some of the time-intensive work of collecting resources for callers. Those tasks were what drove the longer wait times Tessa was meant to reduce.

In another story, the Wall Street Journal interviewed several nurses about their experience with artificial intelligence alerts and algorithms, reporting how the programs’ output and suggestions sometimes dangerously contradict nurses’ expertise and clinical judgment. 

In one case, an oncology nurse received an alert that one of her patients may have sepsis. Although she thought the alert was erroneous, she had to follow protocol by taking a blood sample, potentially exposing the patient to infection and adding to his bill. Another nurse on a call-in advice line listened to a patient’s symptoms and, following protocol suggested by the algorithm, diagnosed the patient with cough, cold and/or flu and scheduled a phone appointment with a doctor several hours later. That patient was later diagnosed with pneumonia, acute respiratory failure and renal failure. He died several days later.

“Whether a nurse is confident enough to trust her own judgment to override an algorithm often depends on hospital policy,” reporter Lisa Bannon wrote. The article also mentioned a National Nurses United survey in which 24% of respondents said they had been prompted by a clinical algorithm to make choices they believed were not in patients’ best interests.

Lessons for journalists 

Both stories are good reminders to practice due diligence when reporting on new technologies. When covering health care systems’ adoption of artificial intelligence, it’s important to go beyond asking why they are adopting the technologies or what they hope to achieve. It’s crucial to ask who’s going to mind the store.

Who will be responsible for monitoring the technology to assess how well it’s working? How will they assess the technology’s performance, and how often will they do so? It’s also important to get feedback from users, whether they be health care workers or patients. What do they like or dislike about the technology? How is it helpful, and what are its limitations? Is it meeting its intended goals?

Keeping these questions in mind is crucial, as AI’s role in health care only seems to be expanding. According to a recent story in HealthTech magazine, emerging uses for AI in health care include:

  • Computer vision technology to automatically track surgical patients.
  • Analysis of real-time situational data to predict patient outcomes and adjust care.
  • Voice-activated technology to manage clinician documentation and route inpatient requests to the right department.

In addition, health systems including UNC Health in North Carolina, UW Health in Wisconsin, Stanford Health Care, and UC San Diego Health are piloting generative AI technology to help physicians respond to patients’ questions in online portals, Becker’s Health IT reported. The technology crafts an initial draft that physicians can review and edit before sending. An article from UC San Diego noted that such messages will carry a clear disclosure stating they were automatically generated and reviewed by a physician.

I have also seen news reports of hospitals using AI to create automated transcripts of patient encounters, to read and analyze patients’ electronic medical records and suggest clinical trials they might qualify for, and to offer wellness features like guided meditations and recommendations for outdoor recreation. Other programs are designed to predict outcomes such as delirium in the intensive care unit, short- and long-term lung cancer risk, and even physician turnover.

Some health systems, such as Northwestern Medicine in Chicago and Duke Health in North Carolina, as well as the Department of Health and Human Services, have added chief AI officers to their executive teams. They, along with biomedical ethicists, will be good sources for journalists covering AI going forward.

Resources:

Karen Blum

Karen Blum is AHCJ’s health beat leader for health IT. She’s a health and science journalist based in the Baltimore area and has written health IT stories for numerous trade publications.