How Boston public radio reporters tackled artificial intelligence in health care


Meghna Chakrabarti and Dorey Scheimer

WBUR radio host Meghna Chakrabarti was visiting her brother on the West Coast last summer, enjoying a glass of wine, when he said he thought artificial intelligence was going to change civilization. While the two went on to discuss other topics, the idea stuck in Chakrabarti’s mind, and she and colleague Dorey Scheimer, a senior editor, started researching the topic. Their original four-part series, “Smarter health: Artificial intelligence and the future of American health care,” aired in May and June on the Boston-based program “On Point.” It’s well worth a listen (or a read; the transcripts are posted online, too).

Chakrabarti and Scheimer spent four months researching and reporting the series. They spoke with about 30 experts across the country, including physicians, computer scientists, patient advocates, bioethicists and federal regulators. They also hired Katherine Gorman, who co-founded the machine intelligence podcast “Talking Machines,” as a consulting editor. The result is an in-depth look at how AI is transforming health care while addressing ethical considerations and regulation of the technology, the people developing it, and patients at the receiving end.

In a new “How I Did It,” Chakrabarti and Scheimer discussed their reporting process and more. (Responses have been lightly edited for brevity and clarity.)

Why did you decide to focus on AI in health care?  

Chakrabarti: As a show, we’re naturally inclined to think about ways that major changes are happening in how we live that we don’t fully understand and that need more in-depth examination. At first, I wanted to do a major series on how AI will change civilization. 

Scheimer: That’s where I came in to crush the civilization idea (laughs). Meghna was sending me lots of links. I was reading on my own and trying to figure out where we could do the most within AI, and medicine and health care just kept coming up. We’ve done a lot of shows on tech and surveillance. It felt like the stakes of AI in health care were much higher than in other industries because it’s our health. On top of the fact that there was so much money going into AI in health care, it felt like a good space for us to focus our reporting.

Chakrabarti: The technology, if done right, could bring [benefits] to health care in both costs and outcomes. This is one sector in which AI will touch everyone. And it’s easy to see that no matter who listens to which episode, they’ll be able to relate somehow.

Can you discuss your reporting process? How did you find experts and decide who to feature? 

Scheimer: I came into this knowing nothing. I started by talking to some big thinkers in this space who had written reports to wrap my head around how we could focus the series. We knew we were doing four episodes. To pull that off, we needed a coherent plan for what we wanted to accomplish. I had phase one of research and reporting then I started to get more granular and specific with the kinds of people that I was talking to. For as big of an industry as it is, it’s a pretty small space. The same names kept coming up. 

It was really hard at the beginning. We had a grand plan that we were going to take one example of AI that was already present in health care and medicine and use it as the start of each episode. That did not pan out because I faced so much resistance from companies at the start of our process. There’s a big hesitancy from the industry that the media will paint AI as robots taking over and replacing your doctors. I was surprised by just how hard it was to get people to talk in the early months of reporting.

Chakrabarti: It’s important to note we also were dealing with natural sensitivities because it was health care. One thing, process-wise, that was very important and that Dorey did quite brilliantly was building trust with sources so that they recognized that we would maintain our journalistic independence and integrity. None of it was going to end up being an ad for their technology, but at the same time, it was going to get fair treatment. If Dorey had been unable to do that, we would not have had a series, period.

Scheimer: Internally, I had to answer questions about why this series took the time it did. For instance, I’m going to do a show about the airlines next week. Within a couple of hours yesterday afternoon, I felt pretty read in and ready to move forward. With this, I felt like I had to have a different level of knowledge and understanding to be taken seriously by the kind of guests that we wanted. That was a different kind of process.

Chakrabarti: We were hyperfocused on getting the facts right and trying to make sure that we gave a fair representation to quite complex ideas and concepts while also making them accessible. We would go back to people with lists of dozens of questions. All of that had to be incorporated both into the reporting of the produced pieces and in the live conversations within each hour. There was this constant loop of trying to make the information more and more detailed. What went hand in hand with that was a lot of this information is published in journals. So we were pulling a lot of papers and reading them so that we could accurately reference things that were in the scientific literature.

How did you engender trust with sources? Any tips for our members? 

Scheimer: Do your homework. I had to go into those interviews with a level of understanding that allowed me to immediately start to build some rapport with the experts I was talking to. I can listen back to a couple of the earlier interviews where I asked potentially dumb questions. Fortunately, those people were kind enough to educate me, but I could hear in later interviews that I got much richer, deeper conversations once I had a base of knowledge.

Also, as with any other story or interview, ask about what you don’t know and give people a chance to reflect on their role and on why perception gaps [with AI] exist. I found that even physicians at major hospitals felt immediately defensive about AI, as if no one understood that there could be benefits and all anyone saw was how bad it could be. Giving people a chance to say, “Here’s why I think there is good that can come from it,” just asking that question, helped in a lot of interviews.

I observed a few themes in your series: The technology has huge promise. People are concerned that humans remain in charge of information. There is a need for transparency with patients. Are there others you noticed?

Scheimer: Liability is still a huge question mark. Nobody knows who is responsible for the application of algorithms, whether it’s the developer, the hospital system or the physicians themselves. That’s leading to a lot of hesitancy to adopt the technology because health systems don’t want to take the risk. There are still a lot of questions about cost, both the cost of the technology itself and its impact on the cost of health care. What we didn’t cover fully but definitely got to in the fourth episode was the role of payers and how insurance companies might influence whether or not AI can reach its potential.

Chakrabarti: I give Dorey’s sources credit for being candid about both their optimism and their realism when it comes to AI in health care. Everyone said this has great potential if we do it right. The “if we do it right” part has to do with development but also regulation. That’s a huge theme that I would love to see smart people report a lot more about, like how to get regulation to catch up with the technology, specifically in health care. We did an episode about that, but it didn’t have any good answers in it because it’s still so undefined.

The other one was the question of who the technology is being developed for. Everyone says they want to help the patient, but sometimes a particular AI program or technology is good for the health system, and that doesn’t necessarily translate into being good for the patient. In a health care system like ours, whose financial dynamics are pretty unique compared with the rest of the world, that’s a really important question.

Was there anything that you were surprised by during your reporting? 

Scheimer: In our third episode on regulations and the FDA, it really surprised me just how ill-equipped the federal agency tasked with regulating this space is. Even as they make some efforts, it does not feel like we’re anywhere close to being in a place to adequately regulate this.

Chakrabarti: People are trying to be very thoughtful in this space, at least the developers and physicians and health care systems people we talked to. They are very willing to grapple with the big questions, and it seems like they at least are trying to do so. What I hope our series accomplished is bringing those questions into the public sphere a little bit more. The other thing that surprised me, as a journalist and also as a patient, and it particularly came out in the first episode, is how much AI is already in use in the health care system. There’s a lot already in play, and it’s having an impact on care, insurance, decision-making, etc. That was quite eye-opening.

Scheimer: I completely agree. I was quite heartened to hear how people are coming to this with the intention of fixing a problem in our health care system. Ziad Obermeyer [a physician and guest in episode 1] is a great example. He was an ER doctor, and he was so frustrated by his inability to know when a patient was having a heart attack that he’s now focused on researching how AI can predict that. I think that people are coming to this problem with the intention to do the most good for the most patients. It will be the fault of our system if they are not able to accomplish that.

Is there a take-home message that you hope listeners took with them?

Scheimer: I hope we gave listeners the tools to understand their care better, to go into a doctor’s office and ask if AI is involved in their care and how that’s impacting their care. I think most patients don’t know that an algorithm is being run to do this. That awareness and deeper understanding from patients I think will help going forward.

Chakrabarti: I agree. Creating awareness of things that were previously in the dark is just a hallmark of what I think the fundamental purpose of journalism is. It’s like, “Hey, things are changing in something that will have an impact on everyone. At least here’s your chance to learn a little bit about it.” 

We also have some takeaways for journalists because of the mix of positive and negative outcomes that could come from AI in health care. As one of our guests said, especially in the second episode, we can’t really put the determination of that on patients. We have to put it on the system, on hospitals, on regulators, etc., to “get it right.” That message really hit home for Dorey and me. That’s something we’re interested in continuing to pursue, to see if the getting-it-right process is coming along as it should.

Karen Blum

Karen Blum is AHCJ’s health beat leader for health IT. She’s a health and science journalist based in the Baltimore area and has written health IT stories for numerous trade publications.