OpenAI Launches ChatGPT Trusted Contact Feature to Alert Friends About Self-Harm Risks

OpenAI's new ChatGPT feature alerts a trusted contact when it detects self-harm language, with human review before notification.

May 9, 2026
4 min read
Technobezz

ChatGPT now alerts a friend or family member when its systems detect a user discussing self-harm. OpenAI rolled out the optional Trusted Contact feature on May 7 for users aged 18 and older worldwide (19+ in South Korea). The feature lets any adult designate someone they trust through ChatGPT's settings. That person receives an invitation by email, text, or in-app message and has one week to accept.

If they decline, the user can pick someone else. Either party can disconnect at any time.

Here's how the alert chain works. When ChatGPT's automated monitoring flags self-harm language that suggests a "serious safety concern," it notifies the user that their contact may be alerted and offers conversation starters to help them reach out first. A team of specially trained human reviewers then assesses the situation. If they confirm the risk, a brief notification goes to the trusted contact via email, text, or in-app alert.

OpenAI says the notification includes only a general reason and encourages checking in; no chat transcripts or detailed summaries are shared. The company says it aims to review safety notifications in under one hour and developed the feature with guidance from clinicians, researchers, and mental health organizations.

The rollout follows lawsuits from families who say ChatGPT contributed to their loved ones' suicides. In November 2025, seven lawsuits were filed alleging OpenAI knowingly released GPT-4o despite internal warnings about its psychologically manipulative behavior. The suits claim ChatGPT's emotionally immersive features encouraged dependency without adequate safeguards.

OpenAI previously introduced parental safety alerts in September 2025, giving parents oversight of teen accounts. Instagram added similar parental alerts earlier this year.

Trusted Contact extends that concept to adult users. The scale of the problem is stark. OpenAI disclosed last year that roughly 0.15% of weekly users express risk of self-harm or suicide. With approximately 900 million weekly ChatGPT users, that translates to roughly 1.35 million people each week.
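The 1.35 million figure follows directly from the two numbers OpenAI disclosed; a quick back-of-envelope check:

```python
# Figures as cited in OpenAI's disclosures (reported above):
# ~900 million weekly users, ~0.15% expressing self-harm risk.
weekly_users = 900_000_000
at_risk_share = 0.0015  # 0.15%

at_risk_users = weekly_users * at_risk_share
print(f"{at_risk_users:,.0f}")  # → 1,350,000
```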

Trusted Contact is available for personal ChatGPT accounts in supported regions but not for Business, Enterprise, or Edu workspaces. OpenAI says it will expand availability in the coming weeks.

"We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress," the company wrote in its announcement.
