How I Did It: From ‘internet rabbit hole’ to compelling series

Jyoti Madhusoodanan


Photo by cottonbro studio via Pexels

Katie Palmer typically covers health technology. But when she stumbled across a paper about how a risk calculator might place Black women at higher risk of complications from a vaginal birth after a Cesarean section (VBAC), she found the topic “too interesting and important not to cover.” 

Katie Palmer

This September, Palmer and her STAT colleague Usha Lee McFarling teamed up to produce “Embedded Bias,” a series of deeply reported stories on medicine’s struggle to address the use of race in clinical algorithms. The package spans several medical specialties and is the product of more than 100 interviews and deep reporting over the course of a year. 

Here, Palmer shares the process of creating the series and her strategies for keeping pace with a fast-moving area of health research.

This interview has been edited for length and clarity.

How did the idea of a series come up?

It was our editors’ doing: I had written a few stories on the use of race, and Usha had been reporting on health equity for many years. Her editor Gideon Gil saw one of the stories that I wrote about a colorectal cancer risk algorithm and the impact on prediction accuracy when you removed race from it. So then both our editors brought us together and were like, “Maybe we should make it a series.” That was about a year ago.

How did you approach the reporting, and how did the series get its final shape? 

We knew it was going to be massive when we started, but I couldn’t have conceived just how complex a topic it was. We did more than a hundred interviews across so many different sectors of medicine, spanning researchers, clinicians, health system executives, the leaders of medical societies, and federal officials.

The biggest challenge for us, reporting-wise, was that all of these conversations were happening simultaneously in each specialty. Early on we thought about doing mini stories on individual algorithms. It ended up being a little bit more conceptual in the end because we wanted a more comprehensive treatment of the topic. It was a very slow, painful process to determine how to split the reporting into different stories, focusing in some cases on individual algorithms and in others on the different settings and groups that are having to grapple with these issues.

What were the big challenges with reporting?

It’s a very academic conversation and abstract in many ways, but these are algorithms that are having immediate impacts on patients. In many cases, it’s not clear that the adjustments to remove or replace race are impacting the patients who are most at risk. As much as that is their intent, there are some populations that are just so disadvantaged that that is a drop in the bucket. 

The hardest part was trying to encapsulate all of these issues in the kickoff story — and doing it in a way that would be meaningful for a wide range of readers who have no background in these topics.

What are your strategies for keeping up with preprints and peer-reviewed literature on such a sweeping topic?

My favorite tool to keep up with the scientific literature is called Semantic Scholar. It’s basically an AI-supported paper database. You have to put a little bit of effort into creating folders and feeds for sectors of research that you’re interested in. But once you tag a bunch of papers in an area, say race-based clinical algorithms, it will start surfacing new papers to you in a way that goes beyond just searching for a keyword. Even if they’re in lesser-known journals that don’t have embargoed press releases and things, you can still capture the way that a conversation is evolving in a space. 

How did you build connections with researchers and expert sources? 

It’s helpful to talk to one person who can offer perspective across a lot of these different fields. Shyam Visweswaran at the University of Pittsburgh was one of the first people that I spoke to, because the existence of this database of algorithms that he had built was central to the series from the beginning. 

It took about a year to do all the reporting, and I probably spoke to him every three months or so. What helped this conversation continue is that I was genuinely invested and interested in the work that he and his lab were doing, so I was staying on top of the preprints as they came out. He didn’t necessarily keep me up to date on them. I had my own methods to keep track and when something popped up, I’d reach out to check on that particular new work. 

Showing your interest in that way and doing that research before reaching out — I think it probably ups your chances of building a connection. It shows that you truly respect their work, and that the conversation is going to be valuable and interesting for them too. 

It can be hard to connect the dots between these algorithms and their real-world impact on patients. How did you navigate this problem in the series?

We did not speak to that many patients for exactly the reason that you described — it is just really rare for patients to even know that a race-based algorithm is being used in their care. One reason why the conversation around eGFR has been so impactful is because you can see the race correction on the lab reports for the kidney function, so it became clear to Black patients that they were being treated differently and it was impacting their access to kidney care. But that’s an extreme outlier. One thing we’re hoping is that surfacing these nearly 50 tools that are used to varying degrees in current clinical care will inspire some patients to question when and how decisions are made. Maybe things will change as these issues become better known in public conversation.
