ALS patients are using AI to preserve their natural voices. Here’s how one reporter covered it


Screenshot of an April 2023 Washington Post article by disability reporter Amanda Morris about AI helping ALS patients save their natural voices. Screenshot taken June 16, 2023.

As a disability reporter for the Washington Post, Amanda Morris has covered hearing loss, long COVID and the spread of fake sign language on TikTok, among other subjects.

 One of her recent stories showed how some patients with amyotrophic lateral sclerosis (ALS) are turning to artificial intelligence to bank their natural voices for use with assistive technology as the disease progresses. 

It’s a compelling read that combines great characters and multiple interactive elements. Readers can hear interview subjects’ natural and AI voices, watch a video of an ALS patient’s daily life and learn how these AI programs process and replicate speech patterns.

Morris talked to AHCJ recently about how she reported the story. 

(Responses have been lightly edited for brevity and clarity.)

What gave you the idea to do this story?

Reporter Amanda Morris

I was thinking about AI a lot and how it has started to become more prominent in health care spaces. We see this in a lot of different medical devices that now use AI to monitor conditions like diabetes. I was scrolling through Twitter one day and saw that somebody I followed had posted about getting a new copy of their voice. I thought, “Well, this is interesting.” I hadn’t realized how big a difference AI was making in the realm of artificial voices. As a society we often think about AI and artificial voices in terms of deep fakes, but in terms of news coverage, I don’t think anybody had really explored the more positive side of that voice technology and how it can help people.

How did you find the patients you profiled?

That was a long process. I reached out to several different ALS organizations and their local chapters, asking to be put in touch with people; I reached out to Team Gleason, which funds many of the voice banking services in the U.S.; and I reached out to people I knew on Twitter and others in my own network, asking if they knew anybody who had used or was planning to use the service. In the end, I interviewed around 20 people. I wanted to make sure I interviewed a wide spectrum of people at every stage of the process, to get a sense of what each stage was like. Some hadn’t lost their voices yet, and some had been using their synthesized voices for years. The four people featured in the story represented different stages of this process and had really compelling stories. It was interesting to see what having a synthesized version of their voice meant, because it’s different for everybody.

Because some of the people you spoke with already had lost their voices, how did you do the interviews? 

For three of them, I sent a list of questions ahead of time to give them time to type out their responses. It took Ruth a long time to type because she has to type with her eyes (using assistive technology). It also took time for Brian and Ron to type because their hand strength is not what it used to be. I had a caveat that, during the interview, I could ask clarifying or follow-up questions. I also had them do the interview alongside somebody else who could help answer basic fact-checking questions, like what month something occurred. If I asked personal questions of Brian and Ron, they could whisper answers to whoever else was in the room with them, and that person could repeat the answer to me.

I really enjoyed the multimedia aspects that you included, like the video of Ron, and explaining how the AI voices work. How did you come up with the idea to include those?

For Ron (featured in the video), I went with a video journalist to Mexico because I felt it was important to see what his day-to-day was like. We spent two full days with Ron, watching him go through his normal daily routine of having his nurse come, get medications, get fed, get dressed, go for a walk with his wife, watch TV, etc. It really gave me a better sense for how he was using his voice but also what it was like for him to go through this process. 

I worked with a team of multimedia journalists and editors on this story and we changed our minds a lot on what elements to include and how to include them. The video journalist working with me on this story, Alexa Juliana Ard, suggested we have people speak directly to the camera and speak for themselves. I just loved that idea. Initially, we wanted to play videos of the individuals before they were diagnosed with ALS to show the difference in their natural and synthetic voices, but the quality and length of the videos everyone gave us was different and we wanted everyone to get similar treatment. 

What was important to you in telling this story?

The most important part was the audio element of capturing those voices. I kept thinking about what their voice represents for each person. Ron is a jokester. He’s very eloquent, very verbose, very patient. Ruth was more of a pragmatic person. I wanted to capture a little bit of each person’s personality, and that’s why I think the pictures, video and audio helped bring everybody to life within the word limits I had. That was what I cared about the most — making sure that people reading the story felt connected to each person.

What advice do you have for other writers who don’t normally cover health IT or tech?

I did a piece recently for The Open Notebook (a website that helps science, environmental and health journalists sharpen their skills) about how to cover assistive technology. I don’t cover technology as a beat, but I do cover assistive technology as part of the disability beat. A common fallacy a lot of people fall into when covering assistive technology is thinking of it as universally great, no matter what, and acting like the technology solves every problem. I don’t think that’s the case a lot of the time — it’s more nuanced than that. I also think that people who cover assistive technology don’t always talk to the users and don’t always think of them as active users. People think of iPhone users as very passive, but users shape the technology a lot. We hack it. We use it to fit our own needs, for uses beyond what the creator could have imagined.

An interesting element of any technology story is: How is it being used differently by different people? What problems is it not yet fixing? What problems is it helping with? Asking yourself those questions and interrogating the technology will lead to a more nuanced, accurate story.

Karen Blum

Karen Blum is AHCJ’s health beat leader for health IT. She’s a health and science journalist based in the Baltimore area and has written health IT stories for numerous trade publications.