OpenAI announced Tuesday it will roll out a dedicated ChatGPT version for users under 18, introducing new parental controls and stronger safeguards as regulators scrutinize the impact of artificial intelligence on young people.
The company said minors will automatically be directed to an age-appropriate ChatGPT experience that blocks sexual and graphic material. In rare cases of acute distress, the system may also contact law enforcement. OpenAI is developing technology to estimate a user’s age more accurately, but the chatbot will default to the under-18 setting whenever there is uncertainty.
The move follows an inquiry by the Federal Trade Commission into how AI tools such as ChatGPT affect children and teens. The agency has asked tech firms to detail what measures they have taken to “evaluate the safety of these chatbots when acting as companions.”
Safety concerns have intensified in recent months after a lawsuit claimed ChatGPT played a role in a teenager’s suicide. “We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” OpenAI CEO Sam Altman wrote in a blog post Tuesday.
OpenAI also provided details on its upcoming parental controls, first announced in August. These features, expected by the end of the month, will allow parents to link their ChatGPT accounts with their teen’s, set blackout hours to restrict access, manage which tools can be used, shape how the chatbot responds, and receive alerts if their child is in distress.
ChatGPT remains intended for users aged 13 and older. Altman acknowledged the complexity of the issue, writing: “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”