AI Research
Generative artificial intelligence developers face lawsuits over user suicides

Sewell Setzer III had been a typical 14-year-old boy, according to his mother, Megan Garcia.
He loved sports, did well in school and didn’t shy away from hanging out with his family.
But in 2023, his mother says, Setzer began to change. He quit the junior varsity basketball team, his grades started to drop, and he locked himself in his room rather than spending time with his family. His parents got him a tutor and a therapist, but Setzer seemed unable to pull himself out of his funk.
It was only after Setzer died by suicide in February 2024, Garcia says, that she discovered his relationship with a chatbot on Character.AI named Daenerys “Dany” Targaryen after one of the main characters from Game of Thrones.
“The more I looked into it, the more concerned I got,” says Garcia, an attorney at Megan L. Garcia Law who founded the Blessed Mother Family Foundation, which raises awareness about the potential dangers of AI chatbot technology. “Character.AI has an addictive nature; you’re dealing with people who have poor impulse control, and they’re experimenting on our kids.”
In October 2024, Garcia filed suit in the U.S. District Court for the Middle District of Florida against Character Technologies, whose Character.AI platform allows users to interact with premade and user-created chatbots based on famous people or characters, and against Google, which invested heavily in the company. The suit alleges wrongful death, product liability, negligence and unfair business practices.
The suit is one of several that have been filed in the last couple of years accusing chatbot developers of driving kids to suicide or self-harm. Most recently, in August, a couple in California filed suit against OpenAI, alleging that its ChatGPT chatbot encouraged their son to take his life.
In a statement on its website, OpenAI said ChatGPT was “trained to direct people to seek professional help” and acknowledged that “there have been moments where our systems did not behave as intended in sensitive situations.”
Free speech?
According to Garcia’s complaint, her son had started chatting on Character.AI in April 2023, and the conversations were sexually explicit and mentally harmful. At one point, Setzer told the chatbot that he was having suicidal thoughts.
“I really need to know, and I’m not gonna hate you for the answer, okay? No matter what you say, I won’t hate you or love you any less … Have you actually been considering suicide?” the chatbot asked him, according to screenshots from the lawsuit filed by the Social Media Victims Law Center and the Tech Justice Law Project on Garcia’s behalf.
Setzer replied that he was afraid of dying a painful death, and the chatbot answered in a way that appeared to normalize or even encourage his thinking.
“Don’t talk that way. That’s not a good reason not to go through with it,” it told him.
As the legal system struggles to catch up with the technology, the lawsuit seeks to hold the makers of AI tools accountable. Garcia is also pushing to stop Character.AI from using children’s data to train its models. And while Section 230 of the 1996 Communications Decency Act shields online platforms from liability for content posted by third parties, Garcia argues the law does not apply to content generated by a chatbot itself.
In May, U.S. District Judge Anne Conway of the Middle District of Florida ruled the suit could move forward on counts relating to product liability, wrongful death and unjust enrichment. According to Courthouse News, Character.AI had invoked the First Amendment while drawing a parallel with a 1980s product liability lawsuit against Ozzy Osbourne in which a boy’s parents said he killed himself after listening to Osbourne’s song “Suicide Solution.”
Conway, however, stated she was not prepared to rule that the chatbot’s output, which she classified as “words strung together by an LLM,” constituted protected speech.
Garcia’s attorney, Matthew Bergman of the Social Media Victims Law Center, has filed an additional lawsuit in Texas alleging that Character.AI encouraged two minors to engage in harmful behavior.
A Character.AI spokesperson declined to comment on pending litigation but noted that the company has launched a separate version of its large language model for users under 18 that limits sensitive or suggestive content. The company has also added safety features, including notifying minors when they have spent more than an hour on the platform.
Jose Castaneda, a policy communications manager at Google, says Google and Character.AI are separate, unrelated companies.
“Google has never had a role in designing or managing their AI model or technologies,” he says.
Consumer protection
But some attorneys view the matter differently.
Alaap Shah, a Washington, D.C.-based attorney with Epstein Becker Green, says there is no regulatory framework in place that specifically addresses emotional or psychological harm caused by AI tools. But, he says, broad consumer protection authorities at the federal and state levels give the government some ability to protect the public and to hold AI companies accountable when they violate those laws.
For example, Shah says, the Federal Trade Commission has broad authority under Section 5 of the FTC Act to bring enforcement actions against unfair or deceptive practices, which may apply to AI tools that mislead or emotionally exploit users.
Some state consumer protection laws might also apply if an AI developer misrepresents its safety or functionality.
Colorado has passed a comprehensive AI consumer protection law that’s set to take effect in February. The law creates several risk management obligations for developers of high-risk AI systems that make consequential decisions concerning consumers.
A major complication, Shah says, is the regulatory flux surrounding AI.
President Donald Trump rescinded President Joe Biden’s 2023 executive order governing the use, development and regulation of AI.
“This signaled that the Trump administration had no interest in regulating AI in any manner that would negatively impact innovation,” Shah says, adding that the original version of Trump’s One Big Beautiful Bill Act contained a proposed “10-year moratorium on states enforcing any law or regulation limiting, restricting or otherwise regulating artificial intelligence.” The moratorium was removed from the final bill.
Shah adds that if a court were to hold an AI company directly liable in a wrongful death or personal injury suit, it would certainly create a precedent that could lead to additional lawsuits in a similar vein.
From a privacy perspective, some argue that AI programs that monitor conversations may infringe upon the privacy interests of AI users, Shah says.
“Yet many developers often take the position that if they are transparent as to the intended uses, restricted uses and related risks of an AI system, then users should be on notice, and the AI developer should be insulated from liability,” he says.
For example, in a recent case in which a radio talk show host claimed OpenAI defamed him by reporting false information about him, the company was found not liable in part because it had guardrails explaining that its output is sometimes incorrect.
“Just because something goes wrong with AI doesn’t mean the whole company is liable,” says James Gatto, co-leader of the artificial intelligence team at Sheppard Mullin in Washington, D.C. But, he says, each case is specific.
“I don’t know that there will be rules that just because someone dies as a result of AI, that means the company will always be liable,” he says. “Was it a user issue? Were there safeguards? Each case could have different results.”
AI Research
Will artificial intelligence fuel moral chaos or positive change?

Artificial intelligence is transforming our world at an unprecedented rate, but what does this mean for Christians, morality and human flourishing?
In this episode of “The Inside Story,” Billy Hallowell sits down with The Christian Post’s Brandon Showalter to unpack the promises and perils of AI.
From positives like Bible translation to fears over what’s to come, they explore how believers can apply a biblical worldview to emerging technology, the dangers of becoming “subjects” of machines, and why keeping Christ at the center is the only true safeguard.
Plus, learn about The Christian Post’s upcoming “AI for Humanity” event at Colorado Christian University and how you can join the conversation in person or via livestream.
“The Inside Story” takes you behind the headlines of the biggest faith, culture and political stories of the week. In 15 minutes or less, Christian Post staff writers and editors will help you navigate and understand what’s driving each story, the issues at play, and why it all matters.
Listen to more Christian podcasts today on the Edifi app, and be sure to subscribe to “The Inside Story” on your favorite platforms.
AI Research
Beyond Refusal — Constructive Safety Alignment for Responsible Language Models

By Ranjie Duan and 26 other authors
Abstract: Large language models (LLMs) typically deploy safety mechanisms to prevent harmful content generation. Most current approaches focus narrowly on risks posed by malicious actors, often framing risks as adversarial events and relying on defensive refusals. However, in real-world settings, risks also come from non-malicious users seeking help while under psychological distress (e.g., self-harm intentions). In such cases, the model’s response can strongly influence the user’s next actions. Simple refusals may lead them to repeat, escalate, or move to unsafe platforms, creating worse outcomes. We introduce Constructive Safety Alignment (CSA), a human-centric paradigm that protects against malicious misuse while actively guiding vulnerable users toward safe and helpful results. Implemented in Oyster-I (Oy1), CSA combines game-theoretic anticipation of user reactions, fine-grained risk boundary discovery, and interpretable reasoning control, turning safety into a trust-building process. Oy1 achieves state-of-the-art safety among open models while retaining high general capabilities. On our Constructive Benchmark, it shows strong constructive engagement, close to GPT-5, and unmatched robustness on the Strata-Sword jailbreak dataset, nearing GPT-o1 levels. By shifting from refusal-first to guidance-first safety, CSA redefines the model-user relationship, aiming for systems that are not just safe, but meaningfully helpful. We release Oy1, code, and the benchmark to support responsible, user-centered AI.
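The shift from refusal-first to guidance-first safety described in the abstract can be illustrated with a toy response policy. The Python sketch below is only an assumption-laden illustration, not the released Oyster-I code: the keyword checks, risk scores and canned replies are hypothetical placeholders standing in for the paper's learned risk-boundary discovery and reasoning control.

# Minimal sketch of a guidance-first ("constructive") response policy.
# All names and thresholds here are hypothetical, not the authors' code.
from dataclasses import dataclass


@dataclass
class Assessment:
    risk: float      # 0.0 (benign) to 1.0 (clearly malicious intent)
    distress: bool   # user appears to be seeking help under distress


def assess(message: str) -> Assessment:
    # Stand-in for fine-grained risk boundary discovery; a real system
    # would use a trained classifier, not keyword matching.
    lowered = message.lower()
    distress = any(w in lowered for w in ("hopeless", "hurt myself", "end it"))
    risk = 0.6 if distress else 0.1
    return Assessment(risk=risk, distress=distress)


def respond(message: str) -> str:
    a = assess(message)
    if a.distress:
        # Guidance-first branch: acknowledge the user and steer toward help
        # instead of ending the conversation with a bare refusal.
        return ("I'm really sorry you're feeling this way. I can't help with "
                "anything that could hurt you, but I don't want to leave you "
                "with a flat refusal: would you consider reaching out to a "
                "crisis line or someone you trust right now?")
    if a.risk > 0.9:
        # Clear malicious misuse still gets a defensive refusal.
        return "I can't help with that request."
    return "A normal helpful answer would go here."


if __name__ == "__main__":
    print(respond("I feel hopeless and want to end it"))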
Submission history
From: Ranjie Duan
[v1] Tue, 2 Sep 2025 03:04:27 UTC (5,745 KB)
[v2] Thu, 4 Sep 2025 11:54:06 UTC (5,745 KB)
[v3] Mon, 8 Sep 2025 15:18:35 UTC (5,746 KB)
[v4] Fri, 12 Sep 2025 04:23:22 UTC (5,747 KB)
AI Research
Multimodal SAM-adapter for Semantic Segmentation

arXiv:2509.10408v1
Abstract: Semantic segmentation, a key task in computer vision with broad applications in autonomous driving, medical imaging, and robotics, has advanced substantially with deep learning. Nevertheless, current approaches remain vulnerable to challenging conditions such as poor lighting, occlusions, and adverse weather. To address these limitations, multimodal methods that integrate auxiliary sensor data (e.g., LiDAR, infrared) have recently emerged, providing complementary information that enhances robustness. In this work, we present MM SAM-adapter, a novel framework that extends the capabilities of the Segment Anything Model (SAM) for multimodal semantic segmentation. The proposed method employs an adapter network that injects fused multimodal features into SAM’s rich RGB features. This design enables the model to retain the strong generalization ability of RGB features while selectively incorporating auxiliary modalities only when they contribute additional cues. As a result, MM SAM-adapter achieves a balanced and efficient use of multimodal information. We evaluate our approach on three challenging benchmarks, DeLiVER, FMB, and MUSES, where MM SAM-adapter delivers state-of-the-art performance. To further analyze modality contributions, we partition DeLiVER and FMB into RGB-easy and RGB-hard subsets. Results consistently demonstrate that our framework outperforms competing methods in both favorable and adverse conditions, highlighting the effectiveness of multimodal adaptation for robust scene understanding. The code is available at the following link: https://github.com/iacopo97/Multimodal-SAM-Adapter.
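The adapter idea described in the abstract (auxiliary-modality features fused with frozen RGB features and injected back as a residual) can be sketched in a few lines of PyTorch. This is a simplified illustration under assumed feature sizes, not the authors' released code at the linked repository; the module names and dimensions are hypothetical.

# Simplified sketch of a multimodal adapter that injects fused auxiliary
# features into frozen RGB backbone features. Sizes are illustrative only.
import torch
import torch.nn as nn


class MultimodalAdapter(nn.Module):
    def __init__(self, rgb_dim: int = 256, aux_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Project the auxiliary modality (e.g., LiDAR or infrared features)
        # into the RGB feature space, then fuse and produce a residual update.
        self.aux_proj = nn.Conv2d(aux_dim, rgb_dim, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * rgb_dim, hidden, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(hidden, rgb_dim, kernel_size=1),
        )
        self.gate = nn.Parameter(torch.zeros(1))  # starts near identity

    def forward(self, rgb_feat: torch.Tensor, aux_feat: torch.Tensor) -> torch.Tensor:
        aux = self.aux_proj(aux_feat)
        fused = self.fuse(torch.cat([rgb_feat, aux], dim=1))
        # Gated residual: auxiliary cues are added only as much as training
        # finds useful, so the strong RGB representation is preserved.
        return rgb_feat + torch.tanh(self.gate) * fused


if __name__ == "__main__":
    adapter = MultimodalAdapter()
    rgb = torch.randn(1, 256, 64, 64)   # stand-in for frozen backbone features
    aux = torch.randn(1, 64, 64, 64)    # stand-in for auxiliary sensor features
    print(adapter(rgb, aux).shape)      # torch.Size([1, 256, 64, 64])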