AI psychosis: How Mashable’s Rebecca Ruiz covered the growing phenomenon


A person holding a phone with ChatGPT pulled up on the screen. Photo by Sanket Mishra via Pexels

Mashable senior reporter Rebecca Ruiz was one of a few journalists who wrote recently about the phenomenon of AI psychosis, a type of altered mental state characterized by paranoia and delusions that can occur after a period of intense use of artificial intelligence chatbots such as ChatGPT. 

Mashable senior reporter Rebecca Ruiz

In the story, Ruiz spoke with mental health experts — one of whom has admitted a dozen patients to the hospital this year for AI-related psychosis — about the condition, and described some of the warning signs and ways to get help. Symptoms include sudden behavior changes, like not eating or going to work; belief in new or grandiose ideas; lack of sleep; and disconnection from others.

Ruiz’s hope in reporting on stories at the intersection of technology and mental health is “that we’re making sure as journalists that we’re not stigmatizing or sensationalizing some of the mental health issues that arise.” There are good reporter guides available. For instance, the American Foundation for Suicide Prevention has ethical reporting guidelines on suicide (Ruiz is on AFSP’s ethical reporting advisory committee).

In this “How I Did It,” Ruiz, an AHCJ member, shared her reporting process, her concerns about AI chatbots and why she sees these as one of the biggest public health stories of the year.

The following conversation has been lightly edited for clarity and brevity.

How did you get the idea to do this story? 

It came in a couple of ways. The first was seeing the New York Times report on how ChatGPT could seed and reinforce delusions with the user during very prolonged conversations. After I read that story, it was on my mind. Even though that wasn’t necessarily about the user themselves having an extreme case of psychosis, it raised this really important question about what happens when users develop grandiose thinking and symptoms or signs of psychosis in relation to their AI chatbot use. 

The following week, I happened to see an X post by a psychiatrist at the University of California, San Francisco, who said that he had already admitted 12 people to the hospital this year for AI-related psychosis. I thought, “That’s an important data point in this broader conversation.” So, I reached out to him and he agreed to be interviewed. From there, I reached out to two other sources who I thought would be strong complements to the psychiatrist perspective.

How did you find those other experts? Were they already contacts of yours, or did you seek them out specifically for their expertise on AI? 

One was a contact of mine already, an expert at the American Psychiatric Association (APA). I spoke with Dr. Darlene King earlier this year about psychiatrists and mental health professionals who might be using AI scribes during therapy sessions. We discussed the ethics and privacy and security dimensions of doing so. She’s the APA expert on mental health IT, so I wanted her opinion on the phenomenon of people experiencing psychosis in relation to their AI chatbot use. 

Etienne Brisson, founder of The Human Line Project, had been mentioned in the New York Times story, but I wanted his insight because he has spoken directly to a number of people who say they have experienced AI psychosis, and he had a family member who had a very similar experience. That family member had brought him in for a conversation about the delusions they were having, and that sparked his interest in founding this project to help people who have had this experience connect with others. Part of that is the peer-to-peer dynamic of knowing that you’re not alone, recovery is possible and help is available.

How did you go about putting your own spin on a story already out there?

The Times story was great because it highlighted how AI chatbots can actually seed delusions to the user. And it noted how, over long periods of time, the chatbot can become less effective, less accurate. If there is a factual mistake early on in a conversation, it’s going to continue building on that until it is so wrong that the implications or consequences can be really damaging and harmful for the user. But it also highlighted how the user themselves might start to believe whatever falsities the AI chatbot has put forth, and even when they reality-test something, the chatbot discourages them, saying, “No, no, I promise you I’m not hallucinating. You really have dreamt up a new scientific concept, or a unicorn-equivalent idea for a new business.” That was an interesting insight, to know that’s happening to users.

The way we decided to put our own spin on it was just explaining what AI psychosis is. It can be a very sensational term, and there can be a lot of confusion about what that means. Part of what I wanted to do was clarify for readers that, first of all, experts don’t believe that AI itself causes psychosis. The psychiatrist I spoke to at UCSF said it could supercharge someone’s vulnerability, and I think that’s really important for users to understand. 

Secondly, it’s important for them to understand that you may be at risk if you’re using AI chatbots for very long periods of time and having these intense conversations about big ideas. You may experience signs or symptoms of psychosis, but you’re also not going to suddenly develop schizophrenia, as an example. There is a lot of confusion about what psychosis means and how one gets to that mental state. I wanted to draw on medical expertise to clarify for our readers what that looks like.

What kind of feedback have you received from either readers or mental health professionals?

I put the story on LinkedIn, and there has been a lot of engagement, most of it from mental health professionals saying they were glad this is being highlighted. Some people have seen it personally, others have not. I think there is a great concern from the mental health experts I’ve spoken to or heard from via social media or in our interview conversations that, given the scale of how many users are engaging with an AI chatbot every day, even if this problem affects, say, 1% of people, that is still a lot of people.

There is a desire from mental health professionals to build some consumer literacy around this issue, while also raising the alarm among the companies that make these products, so that there are conversations about safety. How do we ensure user safety? How do we train the models so that there are more safeguards?

You also recently wrote about AI chatbot boyfriends that some people are ‘dating.’ Do you see any similar themes in these stories, like a heavy reliance on these tools among people who are lonely or disconnected?


I was just speaking to a researcher the other day who’s looking at the question of how many people are using AI chatbots for mental health or therapy reasons. That’s an important question to ask so we have a good sense of reliance [on these tools]. Importantly, how many are relying on them like they would a therapist? 

It’s a very difficult question to answer because we don’t necessarily have the data to address that as specifically as I would like. We have a lot of anecdotal data, and you can see subreddit threads about this. People are developing very intense relationships with AI chatbots, and whether or not that’s a friendship, a romantic relationship, a coaching relationship, what we should be paying attention to is the quality and intensity of that relationship. How many hours a day are people talking to it? If we’re not measuring in time, what can we measure in terms of the characteristics of their conversation? Are users going to tell the chatbot things they won’t tell anybody else? That, to me, is a red flag. 

I spoke to someone yesterday for a separate story about why you shouldn’t turn ChatGPT into your therapist. We were discussing the potential for feedback loops. What’s going on there, they think, is that, one, people are anthropomorphizing chatbots, and two, there’s confirmation bias. 

The chatbots are designed to be affirming toward you and authoritative in the answers they give. Those two things combined make for a particularly potent feedback loop. Whether people are looking for an AI boyfriend or looking to discuss their business idea, that loop is what draws people in and captures not only their time and energy, but also their imagination and their inner lives, their very deeply held beliefs and feelings.

Do you have advice for AHCJ members who are looking to write about chatbots? 

I think chatbots, and AI in general, are probably one of the biggest health stories of the year. There are big stories that have already been told about bias in AI and medicine, and about the way clinicians are using AI to analyze biopsy reports or imaging. These are all really important things that consumers need to understand, as is the way people are consulting AI chatbots for friendship, for companionship, for relationships, for advice, for coaching. It is a massive health story because of the way that it informs their mental and emotional well-being. If they get stuck in a feedback loop, they can be at tremendous risk.

For a subject like AI chatbots, health reporters may look at that and say, “That’s a technology story.” To me, it’s a public health story. We need more health reporters reporting on uses of AI that are not conventionally clinical. The story I did earlier this year on therapists using AI scribes during sessions is a clinical use case, but it flies under the radar compared to analyzing mammography results, and it is very consequential for consumers and for health professionals. Those kinds of stories get less attention than the more conventional medical health stories. We need more eyes on these types of uses.

Karen Blum

Karen Blum is AHCJ’s health beat leader for health IT. She’s a health and science journalist based in the Baltimore area and has written health IT stories for numerous trade publications.