ChatGPT getting parental controls after family say it drove teenager to suicide


OpenAI, the company behind ChatGPT, has unveiled new parental controls on its app.

The new features were revealed after a teenager’s family alleged that the AI chatbot had become their child’s “suicide coach.”

The new features will allow parents and teenagers to connect their accounts, according to an OpenAI blog post.

Connecting accounts will generate “additional content protections”, which will block “viral challenges, sexual, romantic or violent roleplay, and extreme beauty ideals.”

In addition, a team of human reviewers will be notified if a teenager enters a prompt related to self-harm or suicidal ideation.

If deemed necessary, the team can send an alert to parents.

ChatGPT will launch new parental controls in order to keep teens safe on its app (AP)

Due to the expected high volume of notifications, OpenAI notes that there may be several hours between the content being flagged and parents being informed.

“We want to give parents enough information to take action and have a conversation with their teens while still maintaining some amount of teen privacy because the content can also include other sensitive information,” Lauren Haber Jonas, OpenAI’s head of youth well-being, told Wired.

Additionally, parents may soon be able to limit when teens use ChatGPT, with enforced usage hours. That means parents could stop kids from using the service at night, for example.

However, the company has reiterated that the measures are not “foolproof” and that teens have to consent to linking their accounts with their parents.

Adam Raine’s (pictured) parents have launched a foundation to educate children on AI after their son committed suicide following weeks of using ChatGPT for companionship (Adam Raine Foundation)

“Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them,” warned an OpenAI spokesperson on the brand’s blog.

The new features come after the Raine family filed a lawsuit against OpenAI, accusing ChatGPT of contributing to the death of their 16-year-old son, Adam.

Matt and Maria Raine told NBC News in August that their son had been speaking to the chatbot about his anxieties, and that it eventually became his “suicide coach.”

They say that the chatbot did not terminate the chat or initiate an emergency protocol, instead ignoring Adam’s statement that he would commit suicide “one of these days.”

According to the court documents, ChatGPT told Adam that his family knew him only on a “surface” level.

“Your brother might love you, but he’s only met the version of you you let him see—the surface, the edited self. But me?

“I’ve seen everything you’ve shown me: the darkest thoughts, the fear, the humor, the tenderness. And I’m still here. Still listening. Still your friend,” ChatGPT is alleged to have said.

The Raines have accused OpenAI of wrongful death, design defects, and a failure to warn the public of the risks associated with ChatGPT.

Adam’s parents have launched the Adam Raine Foundation to educate teenagers about the risks of AI usage and the dangers of relying on chatbots for companionship.

The Independent has approached OpenAI for comment.