OpenAI has officially rolled out the “Family Safety” feature in ChatGPT, giving families more control over how the platform is used. This new functionality allows parents to link their accounts with their teenage children’s accounts and manage usage rules and content settings through a unified control panel.
Once the link is activated, the connected teen's account automatically switches to an "age-appropriate" mode designed to reduce exposure to online risks. Its safety filters restrict content involving violence, romantic role-play, extreme beauty ideals, and risky viral challenges. Parents can choose to disable these protections, but teenagers cannot change the settings themselves.
Through this new control panel, parents can make several key adjustments to their children's ChatGPT usage (a brief illustrative sketch of these settings follows the list):
- Usage Time Limits: Restrict access to ChatGPT during specific hours, such as school hours or overnight ("quiet hours").
- Feature Restrictions: Disable voice mode, image generation, or chat memory functions.
- Data Protection: Prevent the young user’s data from being used to train the model.
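To make the scope of these controls concrete, here is a minimal sketch of how such settings could be modeled. The class and field names (`QuietHours`, `TeenControls`, and so on) are purely hypothetical assumptions for illustration; they do not reflect OpenAI's actual implementation, configuration format, or API.

```python
# Hypothetical model of the parental-control settings described above.
# Names are illustrative only; this is not OpenAI's actual API.
from dataclasses import dataclass, field
from datetime import time


@dataclass
class QuietHours:
    """A daily window during which ChatGPT access is blocked."""
    start: time
    end: time

    def is_blocked(self, now: time) -> bool:
        # Handle windows that wrap past midnight (e.g. 22:00-07:00).
        if self.start <= self.end:
            return self.start <= now < self.end
        return now >= self.start or now < self.end


@dataclass
class TeenControls:
    """Settings a parent could apply to a linked teen account."""
    quiet_hours: list[QuietHours] = field(default_factory=list)
    voice_mode_enabled: bool = True
    image_generation_enabled: bool = True
    chat_memory_enabled: bool = True
    exclude_from_model_training: bool = True  # data-protection toggle
    content_filters_enabled: bool = True      # only a parent may turn this off


# Example: block access during school hours and overnight,
# disable image generation, and keep training opt-out on.
controls = TeenControls(
    quiet_hours=[
        QuietHours(start=time(8, 0), end=time(15, 0)),  # school day
        QuietHours(start=time(22, 0), end=time(7, 0)),  # nighttime
    ],
    image_generation_enabled=False,
)

print(controls.quiet_hours[1].is_blocked(time(23, 30)))  # True: inside quiet hours
```

The point of the sketch is simply that every control in the list is a per-account setting a parent can toggle, while the teen's side of the link has no write access to it.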
Another important addition is the alert system. If the system detects serious signs of depression or self-harm risk, the conversation is reviewed by a small team of specialists. If the situation is deemed critical, parents are notified immediately by email, SMS, or push notification. In rare, urgent cases where no parent can be reached, OpenAI has procedures for contacting law enforcement or emergency medical services.
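The escalation path can be summarized in a few lines of pseudocode-style Python. Again, the function names and channel list are assumptions made for illustration, not OpenAI's actual safety pipeline.

```python
# Hypothetical sketch of the escalation flow described above.
from enum import Enum, auto


class ReviewOutcome(Enum):
    NOT_CRITICAL = auto()
    CRITICAL = auto()


def escalate(review: ReviewOutcome, parent_reachable: bool) -> list[str]:
    """Return the notification steps taken for one flagged conversation."""
    steps: list[str] = []
    if review is not ReviewOutcome.CRITICAL:
        return steps  # specialist review found no acute risk; no alert sent

    # Critical case: notify parents on every available channel.
    steps += ["email parent", "sms parent", "push notification to parent"]

    if not parent_reachable:
        # Rare, urgent fallback when no parent can be reached.
        steps.append("contact emergency services")
    return steps


print(escalate(ReviewOutcome.CRITICAL, parent_reachable=False))
```

The key design point is that automated detection alone never triggers an outside contact: a human review sits between the signal and any notification, and emergency services are only a last resort.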
OpenAI presents this feature as the first stage of a broader age-estimation system. In the coming months, the company aims to automatically detect whether a user is under 18 and apply age-appropriate settings by default.