Ethical questions journalists should consider when reporting on AI in health care

By Rebecca Vesely

The health sector is rapidly implementing AI tools to help clinicians make care decisions. What are the ethical implications of these products, and what questions should journalists think about when reporting on AI?

I reached out to Danton Char, M.D., assistant professor of anesthesiology, perioperative and pain medicine at Stanford University School of Medicine, to discuss the emerging ethical issues. Char is researching the bioethics of machine learning and co-authored a piece in The New England Journal of Medicine earlier this year about the ethical challenges of implementing AI in health care. (Our conversation is condensed for clarity and brevity.)

Q: What are some of the concerns about AI applications in health care that journalists should know about?

A: Broadly, there are four areas where we see potential ethical challenges with implementing machine learning in health care.

The first is the question of bias in the underlying data. What was the algorithm built on? Biases in the underlying data can be reflected in the algorithm generated from that data. One example of this problem is the attempt to use the Framingham Heart Study to predict cardiovascular risk in non-white populations. It hasn’t worked, because non-white participants were underrepresented in the initial study data.

The second concern is the “black box” problem: there is little transparency into how algorithms are designed, and health care providers have a limited understanding of how they function.

The third problem is designer intent and the unintended consequences of use. Algorithms can be designed to act in ethically inappropriate ways. A recent example outside of health care is Uber’s Greyball program (first reported by Mike Isaac of The New York Times). Greyball was designed to predict which customers might be undercover law enforcement officers, with the intent to circumvent local laws.

Similarly, an algorithm might be used in an unintended way. In health care, algorithms are beginning to be designed to predict 30-day mortality at hospital admission. That seems like a good idea, but consider that patients in the last month of life are very expensive, and the goals of optimizing health and maximizing profits are often not aligned. Algorithms could be used to capitate patients based on severity scoring, or to predict which patients will be the most expensive and then make care recommendations driven by financial motives rather than by what’s best for patient health. Conversely, the same predictions could improve care in the last 30 days of life by identifying these patients early and offering better-targeted care.

Fourth, there is the doctor-patient relationship and AI’s role in it: How do machine-learning systems affect the fiduciary relationship? What responsibilities do such systems have toward patients, and who is responsible if decisions made from machine-learning guidance lead to bad outcomes?

Q: Are patients more or less likely to trust their doctor knowing that doctor has some computer support in decision-making?

A: Eighteen months ago, I might have said some people would see it positively. But since the Cambridge Analytica/Facebook scandal, there is more awareness about the potential downsides.

Q: How would you describe the pace of AI adoption in health care today?

A: It’s breakneck speed. The normal statistical tools and ways of measuring outcomes and effectiveness, like randomized controlled trials, can’t be applied here, which is problematic. The technology is new and not validated. It’s a data-first type of capture.

Q: What advice do you have for journalists covering AI in health care?

A: My advice to journalists is to be skeptical and ask whether (and how) patients are being helped. The stakes are high. It’s one thing to get algorithms wrong for people using social media. It’s another to get them wrong when a baby’s life is at stake.
