As medical AI tools have proliferated, STAT's health tech reporters have been at the forefront of reporting on the bias, ethical issues, and evidence surrounding these systems, including in this series of stories from 2020.
The growth of AI in medicine has largely happened out of the public eye – and often, without the knowledge of the health workers tasked with using the tools or the patients whose care they affect. The data used to train the tools are often hidden, and their regulation remains scattershot.
An investigation by Casey Ross revealed that a common method of using analytics software to target medical services to the patients who need them most infuses racial bias into decisions about who should receive stepped-up care. That reporting turned up evidence of bias within multiple algorithms, drawing new attention to the ways such bias unintentionally worsens existing inequities in the American health care system.
Another investigation, conducted by Rebecca Robbins and Erin Brodwin, found that while AI tools are increasingly guiding decision-making in the clinic, patients are largely not informed that the experimental technologies are being used in their care – shedding light on an issue that even some clinicians using the tools told STAT they hadn’t considered. In another feature, Brodwin used Duke’s AI efforts as a case study to examine how the rollout of such tools can create problems for people who fall lower in a hospital’s hierarchy, such as nurses and medical technicians.
Those investigations, along with another feature from Robbins examining the use of AI in end-of-life care, raised profound questions about transparency and consent with regard to these tools.
Understanding these questions and risks – and weighing them against the potential benefits – requires a deeper understanding of AI itself. That’s where STAT’s own knowledge-based AI, the STAT Terminal for Artificial Computer Intelligence, or S.T.A.C.I., comes in. The tool is designed to inform a broad swath of the public – and not simply those with a deep technical understanding of machine learning – about the role and risks of AI tools that are being used in their care.