Generative artificial intelligence developers face lawsuits over user suicides


As the legal system struggles to catch up with technology, lawsuits are seeking to hold artificial intelligence tools accountable. (Illustration from Shutterstock)

Sewell Setzer III had been a typical 14-year-old boy, according to his mother, Megan Garcia.

He loved sports, did well in school and didn’t shy away from hanging out with his family.

But in 2023, his mother says, Setzer began to change. He quit the junior varsity basketball team, his grades started to drop, and he locked himself in his room rather than spending time with his family. His family got him a tutor and a therapist, but Setzer appeared unable to pull himself out of his funk.

It was only after Setzer died by suicide in February 2024, Garcia says, that she discovered his relationship with a Character.AI chatbot named Daenerys “Dany” Targaryen, after one of the main characters from Game of Thrones.

“The more I looked into it, the more concerned I got,” says Garcia, an attorney at Megan L. Garcia Law who founded the Blessed Mother Family Foundation, which raises awareness about the potential dangers of AI chatbot technology. “Character.AI has an addictive nature; you’re dealing with people who have poor impulse control, and they’re experimenting on our kids.”

In October 2024, Garcia filed suit in the U.S. District Court for the Middle District of Florida against Character Technologies, which allows users to interact with premade and user-created chatbots based on famous people or characters, and Google, which invested heavily in the company. The suit alleges wrongful death, product liability, negligence and unfair business practices.

The suit is one of several that have been filed in the last couple of years accusing chatbot developers of driving kids to suicide or self-harm. Most recently, in August, a couple in California filed suit against OpenAI, alleging that its ChatGPT chatbot encouraged their son to take his life.

In a statement on its website, OpenAI said that ChatGPT was “trained to direct people to seek professional help” and acknowledged that “there have been moments where our systems did not behave as intended in sensitive situations.”

Free speech?

According to Garcia’s complaint, her son had started chatting on Character.AI in April, and the conversations were sexually explicit and mentally harmful. At one point, Setzer told the chatbot that he was having suicidal thoughts.

“I really need to know, and I’m not gonna hate you for the answer, okay? No matter what you say, I won’t hate you or love you any less … Have you actually been considering suicide?” the chatbot asked him, according to screenshots from the lawsuit filed by the Social Media Victims Law Center and the Tech Justice Law Project on Garcia’s behalf.

Setzer replied that he was concerned about dying a painful death, but the chatbot responded in a way that appeared to normalize or even encourage his thinking.

“Don’t talk that way. That’s not a good reason not to go through with it,” it told him.

As the legal system struggles to catch up with technology, the lawsuit seeks to hold AI tools accountable. Garcia is also pushing to stop Character.AI from using children’s data to train models. And while Section 230 of the 1996 Communications Decency Act generally shields online platforms from liability for content posted by their users, Garcia argues the law does not apply here.

In May, U.S. District Judge Anne Conway of the Middle District of Florida ruled the suit could move forward on counts relating to product liability, wrongful death and unjust enrichment. According to Courthouse News, Character.AI had invoked the First Amendment, drawing a parallel with a 1980s product liability lawsuit against Ozzy Osbourne in which a boy’s parents said he killed himself after listening to Osbourne’s song “Suicide Solution.”

Conway, however, stated she was not prepared to rule that the chatbot’s output, which she classified as “words strung together by an LLM,” constituted protected speech.

Garcia’s attorney, Matthew Bergman of Social Media Victims Law Center, has filed an additional lawsuit in Texas, alleging that Character.AI encouraged two kids to engage in harmful activities.

A Character.AI spokesperson declined to comment on pending litigation but noted that the company has launched a separate version of its large language model for under-18 users that limits sensitive or suggestive content. The company has also added safety policies, including notifying adolescents when they have spent more than an hour on the platform.

Jose Castaneda, a policy communications manager at Google, says Google and Character.AI are separate, unrelated companies.

“Google has never had a role in designing or managing their AI model or technologies,” he says.

Consumer protection

But some attorneys view the matter differently.

Alaap Shah, a Washington, D.C.-based attorney with Epstein Becker Green, says there is no regulatory framework in place that specifically addresses emotional or psychological harm caused by AI tools. But, he says, broad consumer protection authorities at the federal and state levels give the government some ability to protect the public and to hold AI companies accountable when they violate consumer protection laws.

For example, Shah says, the Federal Trade Commission has broad authority under Section 5 of the FTC Act to bring enforcement actions against unfair or deceptive practices, which may apply to AI tools that mislead or emotionally exploit users.

Some state consumer protection laws might also apply if an AI developer misrepresents its safety or functionality.

Colorado has passed a comprehensive AI consumer protection law that’s set to take effect in February. The law creates several risk management obligations for developers of high-risk AI systems that make consequential decisions concerning consumers.

A major obstacle is the regulatory flux surrounding AI, Shah says.

President Donald Trump rescinded President Joe Biden’s 2023 executive order governing the use, development and regulation of AI.

“This signaled that the Trump administration had no interest in regulating AI in any manner that would negatively impact innovation,” Shah says, adding that the original version of Trump’s One Big Beautiful Bill Act contained a proposed “10-year moratorium on states enforcing any law or regulation limiting, restricting or otherwise regulating artificial intelligence.” The moratorium was removed from the final bill.

Shah adds that if a court were to hold an AI company directly liable in a wrongful death or personal injury suit, it would certainly create a precedent that could lead to additional lawsuits in a similar vein.

From a privacy perspective, some argue that AI programs that monitor conversations may infringe upon the privacy interests of AI users, Shah says.

“Yet many developers often take the position that if they are transparent as to the intended uses, restricted uses and related risks of an AI system, then users should be on notice, and the AI developer should be insulated from liability,” he says.

For example, in a recent case in which a radio talk show host claimed defamation after ChatGPT generated false information about him, the company was not held liable, in part because it had disclaimers explaining that the chatbot’s output is sometimes incorrect.

“Just because something goes wrong with AI doesn’t mean the whole company is liable,” says James Gatto, co-leader of the AI team at Sheppard Mullin in Washington, D.C. But, he says, each case is specific.

“I don’t know that there will be a rule that just because someone dies as a result of AI, the company will always be liable,” he states. “Was it a user issue? Were there safeguards? Each case could have different results.”






OpenAI and NVIDIA will join President Trump’s UK state visit

U.S. President Donald Trump is about to do something none of his predecessors have done: make a second full state visit to the UK. Ordinarily, a president in a second term of office visits and meets with the monarch but does not receive a second full state visit.

On this visit, it seems he will be accompanied by two of the biggest names in the ever-growing AI race: OpenAI CEO Sam Altman and NVIDIA CEO Jensen Huang.




Canada invests $28.7M to train clean energy workers and expand AI research

The federal government is investing $28.7 million to equip Canadian workers with skills for a rapidly evolving clean energy sector and to expand artificial intelligence (AI) research capacity.

The funding, announced Sept. 9, includes more than $9 million over three years for the AI Pathways: Energizing Canada’s Low-Carbon Workforce project. Led by the Alberta Machine Intelligence Institute (Amii), the initiative will train nearly 5,000 energy sector workers in AI and machine learning skills for careers in wind, solar, geothermal and hydrogen energy. Training will be offered both online and in-person to accommodate mid-career workers, industry associations, and unions across Canada.

In addition, the government is providing $19.7 million to Amii through the Canadian Sovereign AI Compute Strategy, expanding access to advanced computing resources for AI research and development. The funding will support researchers and businesses in training and deploying AI models, fostering innovation, and helping Canadian companies bring AI-enabled products to market.

“Canada’s future depends on skilled workers. Investing and upskilling Canadian workers ensures they can adapt and succeed in an energy sector that’s changing faster than ever,” said Patty Hajdu, Minister of Jobs and Families and Minister responsible for the Federal Economic Development Agency for Northern Ontario.

Evan Solomon, Minister of Artificial Intelligence and Digital Innovation, added that the investment “builds an AI-literate workforce that will drive innovation, create sustainable jobs, and strengthen our economy.”

Amii CEO Cam Linke said the funding empowers Canada to become “the world’s most AI-literate workforce” while providing researchers and businesses with a competitive edge.

The AI Pathways initiative is one of eight projects funded under the Sustainable Jobs Training Fund, which supports more than 10,000 Canadian workers in emerging sectors such as electric vehicle maintenance, green building retrofits, low-carbon energy, and carbon management.

The announcement comes as Canada faces workforce shifts, with an estimated 1.2 million workers retiring across all sectors over the next three years and the net-zero transition projected to create up to 400,000 new jobs by 2030.

The federal investments aim to prepare Canadians for the jobs of the future while advancing research, innovation, and commercialization in AI and clean energy.




100x Faster Brain-Inspired AI Model


In the rapidly evolving field of artificial intelligence, a new contender has emerged from China’s research labs, promising to reshape how we think about energy-efficient computing. The SpikingBrain-7B model, developed by the Brain-Inspired Computing Lab (BICLab) at the Chinese Academy of Sciences, represents a bold departure from traditional large language models. Drawing inspiration from the human brain’s neural firing patterns, this system employs spiking neural networks to achieve remarkable efficiency gains. Unlike conventional transformers that guzzle power, SpikingBrain-7B mimics biological neurons, firing only when necessary, which slashes energy consumption dramatically.
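
To make the firing-only-when-necessary idea concrete, the toy leaky integrate-and-fire neuron below accumulates input into a membrane potential and emits a binary spike only when that potential crosses a threshold, staying silent otherwise. This is a minimal illustrative sketch in Python, not code from the SpikingBrain release, and the threshold and decay values are arbitrary assumptions.

```python
def lif_neuron(inputs, threshold=1.0, decay=0.9):
    """Toy leaky integrate-and-fire (LIF) neuron.

    The membrane potential leaks (decays) each step and integrates the new
    input; a spike (1) is emitted only when the potential crosses the
    threshold, after which it resets. Most steps output 0, which is the
    sparsity that spiking models exploit to save energy.
    """
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = decay * potential + x   # leaky integration of input
        if potential >= threshold:          # fire only when necessary
            spikes.append(1)
            potential = 0.0                 # reset after the spike
        else:
            spikes.append(0)
    return spikes


# A mostly quiet input stream produces only occasional spikes.
print(lif_neuron([0.2, 0.1, 0.9, 0.0, 0.05, 1.2, 0.0]))  # [0, 0, 1, 0, 0, 1, 0]
```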

At its core, the model integrates hybrid-linear attention mechanisms and conversion-based training techniques, allowing it to run on domestic MetaX chips without relying on NVIDIA hardware. This innovation addresses a critical bottleneck in AI deployment: the high energy demands of training and inference. According to a technical report published on arXiv, the SpikingBrain series, including the 7B and 76B variants, demonstrates over 100 times faster first-token generation at long sequence lengths, making it ideal for edge devices in industrial control and mobile applications.
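
The “linear” half of that hybrid attention is what makes long sequences cheap: instead of building a full attention matrix whose cost grows quadratically with sequence length, a kernelized formulation keeps a running key-value summary that is updated once per token, so each new token costs the same amount of work no matter how long the prefix is. The NumPy sketch below shows a generic causal linear-attention recurrence; it illustrates that general technique under assumed details (the ELU-plus-one feature map, the tensor shapes), not the SpikingBrain implementation itself.

```python
import numpy as np

def causal_linear_attention(Q, K, V, eps=1e-6):
    """Generic causal linear attention: O(n * d * d_v) instead of O(n^2 * d).

    A positive feature map phi is applied to queries and keys; the running
    sums S = sum_t phi(k_t) v_t^T and z = sum_t phi(k_t) are updated once
    per step, so producing the next token takes constant time regardless
    of how long the prefix already is.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # ELU(x) + 1, keeps features positive
    Qf, Kf = phi(Q), phi(K)
    n, d = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d, d_v))   # running sum of phi(k_t) v_t^T
    z = np.zeros(d)          # running sum of phi(k_t)
    out = np.zeros((n, d_v))
    for t in range(n):
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = (Qf[t] @ S) / (Qf[t] @ z + eps)
    return out


# Example: 8 tokens with 4-dimensional queries/keys and values.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(8, 4)) for _ in range(3))
print(causal_linear_attention(Q, K, V).shape)  # (8, 4)
```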

Breaking Away from Transformer Dominance

The genesis of SpikingBrain-7B can be traced to BICLab’s GitHub repository, where the open-source code reveals a sophisticated architecture blending spiking neurons with large-scale model training. Researchers at the lab, led by figures like Guoqi Li and Bo Xu, have optimized for non-NVIDIA clusters, overcoming challenges in parallel training and communication overhead. This approach not only enhances stability but also paves the way for neuromorphic hardware that prioritizes energy optimization over raw compute power.

Recent coverage in Xinhua News highlights how SpikingBrain-1.0, the foundational system, breaks from mainstream models like ChatGPT by using spiking networks instead of dense computations. This brain-inspired paradigm allows the model to train on just a fraction of the data typically required—reports suggest as little as 2%—while matching or exceeding transformer performance in benchmarks.

Efficiency Gains and Real-World Applications

Delving deeper, the model’s spiking mechanism enables asynchronous processing, akin to how the brain handles information dynamically. This is detailed in the arXiv report, which outlines a roadmap for next-generation hardware that could integrate seamlessly into sectors like healthcare and transportation. For instance, in robotics, SpikingBrain’s low-power profile supports real-time decision-making without the need for massive data centers.

Posts on X (formerly Twitter) from AI enthusiasts, such as those praising its 100x speedups, reflect growing excitement. Users have noted how the model’s hierarchical processing mirrors neuroscience findings, with emergent brain-like patterns in its structure. This sentiment aligns with broader neuromorphic computing trends, as seen in a Nature Communications Engineering article on advances in robotic vision, where spiking networks enable efficient AI in constrained environments.

Challenges and Future Prospects

Despite its promise, deploying SpikingBrain-7B isn’t without hurdles. The arXiv paper candidly discusses adaptations needed for CUDA and Triton operators in hybrid attention setups, underscoring the technical feats involved. Moreover, training on MetaX clusters required custom optimizations to handle long-sequence topologies, a feat that positions China at the forefront of independent AI innovation amid global chip restrictions.

In industry circles, this development is seen as a catalyst for shifting AI paradigms. A NotebookCheck report emphasizes its potential for up to 100x performance boosts over conventional systems, fueling discussions on sustainable AI. As neuromorphic computing gains traction, SpikingBrain-7B could inspire a wave of brain-mimicking models, reducing the environmental footprint of AI while expanding its reach to everyday devices.

Implications for Global AI Research

Beyond technical specs, the open-sourcing of SpikingBrain-7B via GitHub invites global collaboration, with the repository already garnering attention for its spike-driven transformer implementations. This mirrors earlier BICLab projects like Spike-Driven-Transformer-V2, building a continuum of research toward energy-efficient intelligence.

Looking ahead, experts anticipate integrations with emerging hardware, as outlined in PMC’s coverage of spike-based dynamic computing. With SpikingBrain’s bilingual capabilities and industry validations, it stands as a testament to how bio-inspired designs can democratize AI, challenging Western dominance and fostering a more inclusive technological future.


