Curated RSS Brief
ChatGPT’s ‘Trusted Contact’ will alert loved ones of safety concerns
The feature expands existing teenage safety options to anyone over 18.

By Jess Weatherbed, News Reporter | May 7, 2026, 6:00 PM UTC | Image: The Verge

OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a “Trusted Contact” will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot.

“Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference,” OpenAI said in its announcement. “It offers another layer of support alongside the localized helplines already available in ChatGPT.”

The Trusted Contact feature is opt-in.
Any adult ChatGPT user can enable it by adding contact details for a fellow adult (18+ globally, or 19+ in South Korea) in their ChatGPT account settings. The Trusted Contact must accept the invitation within a week of receiving the request. Users can remove or edit their chosen contact in the settings, and the Trusted Contact can also choose to remove themselves at any time.

OpenAI says the notification is “intentionally limited” and will not share chat details or transcripts with the Trusted Contact. If OpenAI’s automated systems detect that a user is talking about harming themselves, ChatGPT will encourage the user to reach out to their Trusted Contact for help and let them know the contact may be notified. A “small team of specially trained people” will then review the situation, according to OpenAI, and ChatGPT will send a brief email, text message, or in-app notification to the Trusted Contact if the conversation is determined to indicate serious safety concerns.

This builds on the emergency contact feature introduced alongside ChatGPT’s parental controls in September, after a 16-year-old took his own life following months of confiding in ChatGPT. Meta has also introduced a similar feature that alerts parents if their kids “repeatedly” search for self-harm topics on Instagram.
If you want the exact wording, examples, or full context from the publisher, open the original source article.
Open Original Article