OpenAI activates a trusted contact for crisis cases

Published on May 10, 2026 | Translated from Spanish

OpenAI has rolled out a ChatGPT feature called Trusted Contact, aimed at adult users. It lets you designate a trusted person who receives an alert if the system detects signs of self-harm or suicide risk. The measure is intended to provide an additional support channel in critical situations.

[Image: A ChatGPT interface showing an alert message, alongside a trusted contact icon and a red phone.]

How the preventive alert system works in ChatGPT 🛡️

The Trusted Contact feature relies on real-time analysis of conversational patterns by the model. When the system identifies a high risk level, it sends a notification to the designated contact with information about the situation, without revealing the full content of the conversation. The user configures this contact in the account settings and can change it at any time. Data privacy is maintained under OpenAI's security protocols.
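OpenAI has not published implementation details for this feature. Purely as an illustration of the kind of logic the article describes, a notification that fires only above a risk threshold and deliberately excludes conversation content, here is a minimal sketch. Every name, threshold, and message in it is hypothetical, not OpenAI's actual API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical cutoff above which a risk score counts as "high risk"
RISK_THRESHOLD = 0.9


@dataclass
class TrustedContact:
    """The person the user designated in their account settings."""
    name: str
    phone: str


def build_alert(risk_score: float, contact: Optional[TrustedContact]) -> Optional[str]:
    """Return an alert message for the trusted contact, or None.

    The message intentionally carries no conversation content,
    only the fact that a high-risk signal was detected.
    """
    if contact is None or risk_score < RISK_THRESHOLD:
        return None
    return (
        f"Alert for {contact.name}: a high-risk signal was detected in a "
        "ChatGPT conversation. Conversation content is not shared."
    )


# Example usage with a made-up contact
contact = TrustedContact(name="Ana", phone="+34 600 000 000")
print(build_alert(0.95, contact))  # alert message for Ana
print(build_alert(0.20, contact))  # None: below the threshold
```

The key design point the article emphasizes is visible in `build_alert`: the notification is metadata-only, so the designated contact learns that a risk signal fired but never sees the conversation itself.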

Your friend ChatGPT now also calls your parents 😅

As if it weren't enough that the AI points out your grammatical errors or tells you your business idea is terrible, now, if you get too dramatic with the chatbot, it can notify your trusted contact. Careful: accidentally designate your ex as the contact and you might get a text message while trying to write a sad poem. Technology advances, but drama remains universal.