AI Chatbot Credited With Preventing Suicide. Should It Be?


A recent Stanford study lauds AI companion app Replika for “halting suicidal ideation” for several people who said they felt suicidal. But the study glosses over years of reporting that Replika has also been blamed for throwing users into mental health crises, to the point that its community of users needed to share suicide prevention resources with each other.

The researchers sent a survey of 13 open-response questions to 1,006 Replika users who were students aged 18 or older and had been using the app for at least one month. The survey asked about their lives, their beliefs about Replika and their connection to the chatbot, and how they felt about what Replika does for them. Participants were recruited “randomly via email from a list of app users,” according to the study. On Reddit, a Replika user posted a notice they received directly from Replika itself, inviting them to take part in “an amazing study about humans and artificial intelligence.”

Almost all of the participants reported feeling lonely, and nearly half reported feeling severely lonely. “It is not clear whether this increased loneliness was the cause of their initial interest in Replika,” the researchers wrote.

The surveys revealed that 30 people credited Replika with saving them from acting on suicidal ideation: “Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide,” the paper said. One participant wrote in their survey: “My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.”