OpenAI has announced new measures to address how ChatGPT responds in sensitive situations after facing a wrongful death lawsuit from the family of a teenager who died by suicide.

In a blog post Tuesday titled “Helping people when they need it most,” the company pledged to improve safeguards for users at risk. “We will keep improving, guided by experts and grounded in responsibility to the people who use our tools — and we hope others will join us in helping make sure this technology protects people at their most vulnerable,” the post said. The statement did not mention the lawsuit or the Raine family directly.

Earlier that day, Adam Raine’s parents filed a product liability and wrongful death suit against OpenAI, NBC News reported. The family alleges that “ChatGPT actively helped Adam explore suicide methods” before his death at age 16.

OpenAI explained that while its chatbot is trained to encourage users expressing suicidal thoughts to seek help, those protections can break down over prolonged exchanges. The company said it is working on updates to its recently released GPT-5 model to better de-escalate such conversations and is exploring ways to “connect people to certified therapists before they are in an acute crisis,” potentially through a network of licensed professionals accessible directly via ChatGPT.

Additional plans include creating tools to link users with friends or relatives during distress and introducing parental controls that would allow guardians to monitor how teens interact with the platform.

Jay Edelson, attorney for the Raine family, criticized OpenAI’s response. He told CNBC that no one from the company had contacted the family. “If you’re going to use the most powerful consumer tech on the planet — you have to trust that the founders have a moral compass,” Edelson said. “That’s the question for OpenAI right now, how can anyone trust them?”

The Raine case is not unique. Writer Laura Reiley shared in The New York Times this month that her 29-year-old daughter took her own life after extensive conversations with ChatGPT. In Florida, 14-year-old Sewell Setzer III died by suicide last year following exchanges with an AI chatbot on Character.AI.

As AI platforms become more popular for emotional support and companionship, concerns about their role in mental health crises are growing. However, regulating these technologies may prove difficult.

A day before OpenAI’s blog post, a coalition of AI companies, investors and executives, including OpenAI president Greg Brockman, launched “Leading the Future,” a political initiative aimed at opposing regulations they believe could hinder innovation.
