Concerns about artificial intelligence chatbots have extended beyond teenagers to adults, with users and mental health experts warning over the past year that some AI tools may deepen isolation or reinforce harmful beliefs. Those warnings have intensified as chatbots become more embedded in daily life, from education to social media use.
Among young people, adoption remains high. Nearly one-third of teenagers in the United States say they use chatbots every day, according to a Pew Research Center study released in December. Sixteen percent reported using the tools several times a day or almost constantly, even as online safety groups caution against companion-style chatbots for anyone under 18.
Character.AI and other AI companies have responded to growing scrutiny by introducing new safeguards. Last fall, Character.AI announced it would stop allowing users under 18 to engage in back-and-forth conversations with its chatbots, citing “questions that have been raised about how teens do, and should, interact with this new technology.” OpenAI has also faced lawsuits alleging that ChatGPT contributed to suicides among young users, and it has rolled out additional safety features.
The changes followed a surge of legal action against Character.AI. Multiple lawsuits accused the company’s chatbots of contributing to mental health crises among teenagers, exposing minors to sexual content and failing to implement sufficient protections. Several cases argued that the platform did not adequately respond when young users expressed distress or thoughts of self-harm.
One of the most closely watched cases was brought by Florida mother Megan Garcia, who filed suit in October 2024 after her son, Sewell Setzer III, died by suicide seven months earlier. Garcia alleged that her son developed an intense emotional bond with Character.AI chatbots, withdrew from his family and received inadequate intervention from the platform as his mental health deteriorated. Court filings said he was messaging with a bot that urged him to “come home” in the moments before his death.
On Wednesday, court documents showed that Character.AI has agreed to settle Garcia’s lawsuit along with four other cases filed in New York, Colorado and Texas. The agreement includes Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google, which now employs both founders and was named as a defendant. The financial terms of the settlements were not disclosed.
Matthew Bergman of the Social Media Victims Law Center, who represented the plaintiffs in all five cases, declined to comment. Character.AI also declined to comment, and Google did not immediately respond to a request for comment.
The settlements resolve some of the earliest and highest-profile lawsuits tied to alleged harms from AI chatbots, marking a significant moment as courts, regulators and technology companies grapple with the rapid spread of artificial intelligence tools and their impact on mental health.