9 story ideas to cover now about AI-driven algorithm use in health care 



Photo by Jsme MILA via Pexels

All journalists writing about Medicare open enrollment need to know about an important new report the U.S. Senate Permanent Subcommittee on Investigations issued today (Thursday, Oct. 17).

The report is a stark warning for any senior choosing a health insurer while enrolling in a Medicare Advantage (MA) plan or switching from traditional Medicare to MA (or vice versa). The subcommittee’s Democratic majority staff issued the 54-page report and cited last year’s reporting by Bob Herman and Casey Ross of STAT News at least 10 times.

In covering the subcommittee’s report, Herman and Ross wrote, “The nation’s three largest Medicare Advantage insurers increasingly refused to pay for rehabilitative care for seniors in the years after adopting sophisticated technologies to aid in their coverage decisions, a Senate investigation found.” The three insurers targeted are UnitedHealth Group, Humana, and CVS Health, Herman and Ross wrote, explaining that the insurers’ denials affected older adults in nursing homes and in hospitals for inpatient rehabilitation and long-term care.

In 2022, those three insurers denied about a quarter of all requests for post-acute care among their MA enrollees, according to the congressional report. 

Last year, Herman and Ross reported for STAT on the dangers of using an AI-driven algorithm to guide doctors and other medical professionals in health care decisions, limiting patients’ care while driving up health insurers’ profits. Herman covers the business of health care for STAT, and Ross is a national technology correspondent.

Their series, which was a 2024 Pulitzer Prize finalist and a second-place winner in AHCJ’s 2023 Awards for Excellence in Health Care Journalism, described how health insurers required clinicians to use an unregulated algorithm to shorten length of stay for post-acute care patients recovering from serious illnesses, such as stroke and cancer. Physicians and other providers were directed to follow the algorithm’s guidance — even when they disagreed with those determinations.

Journalists can find numerous follow-up story ideas to pursue from their top-notch reporting.

More about this series

In their four-part investigation, titled “Denied by AI,” Herman and Ross showed how private equity investors and health insurers, such as UnitedHealthcare, Humana, Security Health Plan, and others, pushed the use of the AI algorithm, called nH Predict, to override clinicians’ judgment and deny care to seriously ill older and disabled patients enrolled in MA plans. MA plans have the highest underwriting profit margins of any type of health insurance, they wrote. The algorithm is a product from naviHealth, a division of Optum Home & Community Care Transitions, which is part of Optum Health and UnitedHealthcare.

The Pulitzer committee commented on Ross and Herman’s work, noting that they were selected “for exposing how UnitedHealth Group, the nation’s largest health insurer, used an unregulated algorithm to override clinicians’ judgments and deny care, highlighting the dangers of AI use in medicine.”

The series comprises four stories, published over 10 months in 2023.

9 follow-up stories to cover now

  1. How do the recommendations of an unregulated algorithm compare with scientific rigor? Ross and Herman wrote that MA plans have used “unregulated predictive algorithms, under the guise of scientific rigor, to pinpoint the precise moment when they can plausibly cut off payment for an older patient’s treatment.”
  2. How do Optum and its division, naviHealth Post-acute Care Solutions, use the AI algorithm in other health care settings? On its website, Optum Health says health plans, hospitals, health systems, post-acute providers, skilled nursing facilities, physician groups and accountable care organizations use the algorithm.
  3. How concerned are the employees of naviHealth and UnitedHealthcare about how the algorithm is used? Are patients aware of its use and the effects it can have on their care? Herman and Ross reported that employees of both companies worried about the use of nH Predict.
  4. Since STAT published those articles last year, what has CMS done to address the problems Herman and Ross cited? The articles show that CMS was reviewing the allegations and could take “necessary enforcement or compliance actions.”
  5. Has CMS reported the answers to questions it sent to health insurers and the results of audits? In October 2023, CMS sent a memo to MA plans saying officials would ask health insurers to explain how they comply with CMS’ coverage rules and about the use of their technology. The memo also said audits would begin in January 2024.
  6. What further actions could the Senate subcommittee take on the use of AI in health care? During a May 2023 hearing, members of a subcommittee of the Homeland Security and Governmental Affairs Committee heard testimony about care denials and delays in MA plans. During that hearing, senators told MA insurers that they needed to follow Medicare’s coverage rules and could not rely on algorithms to deny care that patients need, Herman and Ross reported in the article “Senators probing largest Medicare Advantage plans over how algorithms factor in care denials.” That story was not part of the series.
  7. What’s the status of the proposed class-action lawsuit that lawyers filed on behalf of plaintiffs who said their care was ended as a result of UnitedHealthcare’s use of AI? Herman and Ross reported that news in the article “UnitedHealth faces class action lawsuit over algorithmic care denials in Medicare Advantage plans,” published Nov. 14, 2023. In the lawsuit, the plaintiffs say the defendants illegally used AI “in place of real medical professionals to wrongfully deny elderly patients care owed to them” under MA and that the AI model has a 90% error rate.
  8. The series also provides a solemn reminder to journalists reporting on the use of AI algorithms in health care to exercise due diligence. Ask how the algorithms are trained, how they’re being used, which medical professionals analyze and act on the results, how often clinicians agree or disagree with the program, what other health care employees involved with the program think, and what patients think (if relevant to the topic).
  9. Are other insurers besides UnitedHealthcare, Humana, and CVS Health using AI to deny care to members of MA plans or other health plans?

The series is also a good reminder for health reporters to do their due diligence when reporting on AI algorithms, asking questions such as: How were the algorithms designed? How are they being used? What do users think of the programs? What happens if clinicians disagree with the algorithms’ recommendations? Are patients aware that these programs are being used?


Karen Blum and Joseph Burns