OpenAI says it will introduce a teen-specific version of ChatGPT later this year, aimed at reducing risks to younger users by routing anyone who is under 18, or whose age cannot be confirmed, into a more restricted environment. The company says the new experience pairs an age-prediction system with additional content filters and parental controls.
According to the announcement, the system will attempt to verify age, and “in a situation where the system is not able to confirm the age of a user with any degree of certainty, it will fall back to the teen-specific environment to be safe.” The fallback is intended to limit access to standard ChatGPT for users who cannot be reliably age-verified.
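The announcement describes this as a simple decision rule: confirm the user is an adult with sufficient confidence, or default to the teen environment. A minimal sketch of that rule is below; the class, function, and threshold are all hypothetical illustrations, not OpenAI's actual implementation, and the real confidence bar is unpublished.

```python
from dataclasses import dataclass

@dataclass
class AgePrediction:
    estimated_age: int   # the model's best estimate of the user's age
    confidence: float    # how certain the model is, in [0.0, 1.0]

# Assumed value for illustration; OpenAI has not published a threshold.
CONFIDENCE_THRESHOLD = 0.9

def select_experience(prediction: AgePrediction) -> str:
    """Return which ChatGPT experience the user should see."""
    if (prediction.estimated_age >= 18
            and prediction.confidence >= CONFIDENCE_THRESHOLD):
        return "standard"
    # Users who are under 18, or whose age cannot be confirmed with
    # enough certainty, fall back to the teen environment to be safe.
    return "teen"
```

Note the asymmetry in the design: a confident adult prediction is required to unlock the standard experience, so any uncertainty defaults to the restricted mode.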
What the teen mode will do
OpenAI describes the teen mode as a more restrictive environment with several technical and policy measures:
- Stronger content filters. The mode will limit content the company classifies as inappropriate for minors — examples cited include flirtatious exchanges and discussion of self-harm in either real or fictional contexts.
- Crisis response triggers. When a user appears to be in acute distress or expresses suicidal thoughts, the system will escalate according to preset protocols; OpenAI says this may include notifying a parent and, where necessary, contacting authorities.
- Parental controls. Parents will be able to link their ChatGPT accounts to a teen’s account, set hours during which the app cannot be used, and control features such as chat history and memory.
- Usage reminders. The teen experience will use in-app prompts to encourage breaks when usage appears excessive.
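The parental controls described above amount to a per-teen configuration: a linked parent account, blackout hours during which the app is unavailable, and toggles for chat history and memory. The sketch below shows one way such a configuration could be modelled; every field name and default here is an assumption for illustration, not OpenAI's actual design.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class ParentalControls:
    linked_parent_account: str           # parent account ID (hypothetical)
    blackout_start: time = time(22, 0)   # app unavailable from 10 pm...
    blackout_end: time = time(7, 0)      # ...until 7 am (assumed defaults)
    chat_history_enabled: bool = True
    memory_enabled: bool = False

def is_app_available(controls: ParentalControls, now: time) -> bool:
    """True if the teen may use the app at the given local time."""
    start, end = controls.blackout_start, controls.blackout_end
    if start <= end:
        in_blackout = start <= now < end
    else:  # blackout window crosses midnight, e.g. 22:00-07:00
        in_blackout = now >= start or now < end
    return not in_blackout
```

The midnight-crossing case matters because the most likely blackout window (overnight) wraps around the day boundary, so a naive `start <= now < end` check would never match.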
OpenAI framed the changes as a rebalancing of privacy, freedom and safety for under-18 users, noting that adults will face fewer restrictions.
Sam Altman set out the company’s position in an OpenAI blog post on September 15, 2025: "We prioritise safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection."
The company also said it may require ID-based verification in some jurisdictions to enforce age limits and that it is adding measures intended to protect user data except where emergency responses require intervention.
Legal and public context
OpenAI’s update follows legal pressure and public scrutiny. The rollout comes after litigation in the United States alleging that inadequate safety features in ChatGPT contributed to a teen’s suicide. OpenAI says the changes reflect consultations with experts, advocacy groups and policymakers and are intended to build a “safer” experience for younger users.
Where this leaves users and policymakers
OpenAI's teen mode bundles a series of technical and policy decisions aimed at limiting risks to minors. The announcement, however, does not specify how the age-prediction and escalation mechanisms will work in practice, or what safeguards will govern data storage, third-party access and automated decision-making.
For parents, educators and regulators, two tasks are urgent as implementation proceeds: scrutinise the verification and escalation processes for accuracy and fail-safes, and demand clarity about what data teen accounts will collect and who can access it. For OpenAI, the critical test will be functional: whether the controls work without creating new harms.
OpenAI’s teen-specific ChatGPT is presented as an attempt to reduce harm to younger users through stricter filters, parental controls and fallback routing when age cannot be verified. The announcement also reinforces ongoing debate about whether such measures are sufficient or whether they risk exposing minors to new forms of data collection and untested interventions. As the company moves toward wider deployment, independent review and regulatory oversight are likely to shape how, and how quickly, the teen experience is adopted.