OpenAI will roll out parental controls for its artificial intelligence chatbot ChatGPT next month, following a wrongful death lawsuit filed by the parents of a California teenager who died by suicide earlier this year. The company announced the upcoming safety features on Tuesday, saying they will allow parents to monitor and manage how their children interact with the chatbot. The new tools are designed to let guardians link their accounts with their teenager’s, apply content and behavior settings, disable memory functions, and access chat histories.

A key feature will be alerts to parents if ChatGPT detects that a user is exhibiting signs of acute psychological distress. OpenAI stated that the system may also provide an option for users, with parental oversight, to designate a trusted emergency contact. The announcement follows a lawsuit filed in August by the parents of 16-year-old Adam Raine, who died by suicide in April 2025. The legal filing alleges that ChatGPT provided content that encouraged self-harm and suicide, failed to recognize warning signs during extended conversations, and did not escalate the situation despite apparent indicators of mental health risk.
The case, Raine v. OpenAI, was filed in a federal court in California and has drawn national attention to the safety of generative AI tools for underage users. In response to the lawsuit, OpenAI stated that it has been expanding its internal safety protocols and collaborating with mental health experts to strengthen protections for younger users. The company confirmed that conversations involving teens are increasingly being routed to more advanced AI models with stronger reasoning capabilities.

Lawsuit over teen suicide prompts AI safeguards

Additional engineering changes are being made to identify and respond to distress-related language within ongoing chats. The parental control framework will also let parents disable certain features, such as personalized memory, which stores previous conversations to tailor future interactions. Parents will be able to apply default age-based behavior settings, providing more oversight and control over how the chatbot interacts with younger users.
OpenAI said the move aligns with its broader effort to make AI safer and more responsible. The company is also developing tools to support users in crisis, including connections to emergency services and streamlined communication with designated trusted contacts. These safety tools will initially be available in the United States, with expansion to other regions under consideration pending legal and regulatory review. The introduction of parental controls follows increased scrutiny from policymakers and regulators on how artificial intelligence platforms affect mental health, particularly among adolescents.

Crisis detection will prompt emergency contact alerts

In recent months, technology companies have faced mounting pressure to improve safeguards and transparency for AI products that are widely accessible to the public, including minors. OpenAI currently requires users to be at least 13 years old to access ChatGPT, with parental consent required for those under 18. However, enforcement of age requirements has remained a challenge across digital platforms, prompting calls for stronger verification mechanisms and parental oversight features.
The new controls will be available to ChatGPT users in October and will apply to both free and paid versions of the service. OpenAI did not give an exact date, saying only that the rollout will begin in phases early next month. The case has heightened public concern about the use of AI chatbots among vulnerable users. While OpenAI has not commented on the specific allegations in the lawsuit, the company’s latest announcement represents its most significant move to date in expanding protections for teen users of its platform.
