How STAT’s Brittany Trang makes sense of AI in health care 

Karen Blum

Brittany Trang

For the past 14 months, health tech reporter Brittany Trang has been writing the weekly AI Prognosis newsletter for STAT, covering topics like what malpractice insurance has to do with AI, why doctors and patients have different AI chatbot experiences and why some hospitals are making their own ChatGPTs for patient records.   

Trang says she discovered her passion for journalism through science. While studying English and chemistry as an undergraduate at The Ohio State University, she noticed that many researchers lacked the writing skills needed to apply for grants and to explain their work clearly. She took some science writing classes and began freelancing while completing her Ph.D. in chemistry with a focus on removing PFAS — the “forever chemicals” that can remain in the environment and human body for decades.

In this “How I Did It,” Trang discusses her journey from science to journalism, and how she cuts through the hype surrounding AI in health care.

(Responses have been lightly edited for brevity and clarity.) 

How did you transition from science to writing?

Originally I wrote about what it’s like to be a scientist or a grad student, and then it turned into reporting. I did the AAAS mass media fellowship. I worked at the Milwaukee Journal Sentinel for a summer and liked it way more than I thought I was going to, so the last year of my Ph.D., I went back and forth: Should I do a postdoc? Should I switch to journalism? I changed my mind every two weeks. You can see where I ended up. I was a Sharon Begley Reporting Fellow at STAT and joined the staff after. 

Your AI Prognosis newsletter marked its one-year anniversary in January. How did that newsletter come about?

It was an editor pitch. We were early to [covering] AI in health care. My colleague Casey Ross has been writing about it for years. Obviously, with AI just being everywhere, it’s something that we weren’t going to be able to escape. So we thought we could be an authoritative voice.

What’s a typical week like for you? When are you writing versus planning for your newsletter?

The newsletter comes out on Wednesdays, and today [while we’re speaking] is Wednesday, so I should be done for the week. But I have Friday off, so I realized, “Oh no, I need to figure out what I’m doing next week.” It basically takes all day Tuesday and a lot of Monday, and I have to plan for it in advance, whether that’s figuring out what the topic is, or setting up calls, usually the end of the week before. It kind of feels like it never stops.

What makes a good story for you?

I think a good story idea is not something that’s necessarily news, but something that says something about the larger world of health or medicine and AI. Sometimes it is something that is in the news. Sometimes it’s a new study that comes out. Sometimes it’s a new thing that a company has announced. But, sometimes it’s, “Let’s talk to an interesting person who has interesting insights into how AI is being used in their corner of the industry, or how this might affect the rest of the field.” That also is helpful, because the idea is not that the readers are super keyed into AI. The idea for me is that maybe this is the one thing that somebody reads about AI and health care this whole week: What’s something that is tangible that they can leave [with] and apply if they’re in a meeting and somebody brings up something about AI? What’s a principle that is helpful for them to think through what’s going on and what’s important?

How much are you traveling versus working from your home office in Ohio?

I work from home all the time when I am not traveling to go to a conference. Probably once every other month is about how much I travel for work. This year, I went to CES (the big Consumer Electronics Show in Las Vegas). I just went to New York to moderate a panel at a health tech conference, and the week before that, I got to go to Eli Lilly in Indianapolis.

What’s important to you in telling these stories? 

It may be the scientist in me, but I get really annoyed when people just say a lot of nothing. So I like to make sure that we’re saying something, and that the people I’m talking to and the ideas that we’re talking about in AI Prognosis are important and are things that maybe people aren’t talking about, or maybe try to cut through all of the hype, because it’s not my job to repeat a press release. It’s my job to say hey, these are also things that you should consider, like, this doesn’t get enough attention, or that this can become a problem and people aren’t addressing this. I’m trying to say something substantive instead of trying to just repeat all the hype that’s out there, because you can get that anywhere. 

What is a big trend in AI that you’re seeing now?

I think what’s really coming home is using AI for, essentially, what will immediately increase your margin. For right now, I think that is the revenue cycle. People are converging on “how do we get at where dollars in health care come from,” which is billing for services and getting paid for them. At the conference I went to in New York, Medicare Director Chris Klomp said, “What a disappointment that some of our greatest minds building technology right now are all around revenue optimization.”  

There was just a study from Blue Cross Blue Shield saying, we see evidence of providers using AI to code for [items] that they’re not treating people for. Thus, it’s just coding for more stuff so that we can get paid for it. And recently, R1, a big revenue cycle company, and Heidi, a fairly large AI scribe company, announced a partnership where they are planning to use documentation to support putting more codes on there. They say it’s to make sure everybody gets paid for the [things] that they’re doing, and to make sure it’s complete and optimized for reimbursement. But, they’re getting down to trying to use the AI tools that we have to make more money. That’s one of the big things that I see that I’ve been trying to hammer home to people for six or seven months at this point: This is happening, and it’s only going to get more and more obvious.

Do you have advice for other journalists looking to write about AI or get into this space?

We serve the public, we serve the patients. And a lot of stuff that’s happening [in this space] sounds good until you dig down into it and find out that all these things that you assumed happened — that they tested this, that they asked patients about it, that they checked the performance against this — none of that is happening. People are just rolling this out and just hoping nobody is going to call them [out]. There’s no mechanism right now for them to get held responsible for their claims, or to have some sort of basic testing that they have to do before rolling out tools. Really question what’s at stake here, and are the people who are rolling this out testing all the things that could go wrong? There’s a hazard and risk trade-off, but I think there’s a lot of tools that are being rolled out where you assume that they have tested a bunch of stuff and they haven’t, and that is not in the patients’ or the health-care-system-at-large’s best interest.

Karen Blum

Karen Blum is AHCJ’s health beat leader for AI and Patient Safety. She’s a health and science journalist based in the Baltimore area and has written health IT stories for numerous trade publications.