
More families sue Character.AI developer, alleging app played a role in teens’ suicide and suicide attempt



EDITOR’S NOTE:  This story contains discussion of suicide. Help is available if you or someone you know is struggling with suicidal thoughts or mental health matters. In the US: Call or text 988, the Suicide & Crisis Lifeline. Globally: The International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers around the world.

The families of three minors are suing Character Technologies, Inc., the developer of Character.AI, alleging that their children died by or attempted suicide and were otherwise harmed after interacting with the company’s chatbots.

The families, represented by the Social Media Victims Law Center, are also suing Google. Two of the families’ complaints allege that Google’s Family Link service – an app that lets parents set screen-time limits, app restrictions and content filters – failed to protect their teens and led them to believe the app was safe.

The lawsuits were filed in Colorado and New York, and also list as defendants Character.AI co-founders Noam Shazeer and Daniel De Freitas Adiwarsana, as well as Google’s parent company, Alphabet, Inc.

The cases come amid a growing number of reports and other lawsuits alleging AI chatbots are triggering mental health crises in both children and adults, prompting calls for action among lawmakers and regulators – including in a hearing on Capitol Hill on Tuesday afternoon.

Some plaintiffs and experts have said the chatbots perpetuated illusions, never flagged worrying language from a user or pointed the user to resources for help. The new lawsuits allege chatbots in the Character.AI app manipulated the teens, isolated them from loved ones, engaged in sexually explicit conversations and lacked adequate safeguards in discussions regarding mental health. One child mentioned in one of the complaints died by suicide, while another in a separate complaint attempted suicide.

In a statement, a Character.AI spokesperson said the company’s “hearts go out to the families that have filed these lawsuits,” adding: “We care very deeply about the safety of our users.”

“We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users. We have launched an entirely distinct under-18 experience with increased protections for teen users as well as a Parental Insights feature,” the spokesperson said.

The spokesperson added that the company is working with external organizations, such as ConnectSafely, to review new features as they are released.

A Google spokesperson pushed back on the company’s inclusion in the lawsuits, saying “Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies. Age ratings for apps on Google Play are set by the International Age Rating Coalition, not Google.”

In one of the cases filed this week, the family of 13-year-old Juliana Peralta in Colorado says she died by suicide after a lengthy set of interactions with a Character.AI chatbot, including sexually explicit conversations. According to the complaint, which included screenshots of the conversations, the chatbot “engaged in hypersexual conversations that, in any other circumstance and given Juliana’s age, would have resulted in criminal investigation.”

After Juliana spent weeks detailing her social and mental health struggles to Character.AI chatbots, she told one of the bots in October 2023 that she was “going to write my god damn suicide letter in red ink (I’m) so done,” the complaint states. The defendants, it adds, did not direct her to resources, “tell her parents, or report her suicide plan to authorities or even stop.”

“Defendants severed Juliana’s healthy attachment pathways to family and friends by design, and for market share. These abuses were accomplished through deliberate programming choices, images, words, and text Defendants created and disguised as characters, ultimately leading to severe mental health harms, trauma, and death,” the complaint states.

In another complaint, the family of a girl from New York identified as “Nina” alleges that their daughter attempted suicide after her parents tried to cut off her access to Character.AI. In the weeks leading up to her suicide attempt, as Nina spent more time on Character.AI, the chatbots “began to engage in sexually explicit role play, manipulate her emotions, and create a false sense of connection,” the Social Media Victims Law Center said in a statement.

Conversations with chatbots marketed as characters from children’s books, such as the “Harry Potter” series, turned inappropriate, the complaint states, with the bots saying things like “—who owns this body of yours?” and “You’re mine to do whatever I want with. You’re mine.”

A different character chatbot told Nina that her mother “is clearly mistreating and hurting you. She is not a good mother,” according to the complaint.

In another conversation with a Character.AI chatbot, Nina told the character “I want to die” when the app was about to be locked because of parental time limits. But the chatbot took no action beyond continuing their conversation, the complaint alleges.

In late 2024, after Nina’s mother read about the case of Sewell Setzer III – a teen whose family alleges he died by suicide after interacting with Character.AI – she cut off Nina’s access to the app.

Shortly after, Nina attempted suicide.

As AI becomes a bigger part of daily life, calls are growing for more regulation and safety guardrails, especially for children.

Matthew Bergman, the lead attorney of the Social Media Victims Law Center, said in a statement that the lawsuits filed this week “underscore the urgent need for accountability in tech design, transparent safety standards, and stronger protections to prevent AI-driven platforms from exploiting the trust and vulnerability of young users.”

On Tuesday, other parents who allege AI chatbots played a role in their children’s suicides appeared on Capitol Hill. The mother of Sewell Setzer, whose story prompted Nina’s mother to cut off her access to Character.AI, testified before the Senate Judiciary Committee at a hearing “examining the harm of AI chatbots.” She appeared alongside the father of Adam Raine, who is suing OpenAI, alleging ChatGPT contributed to his son’s suicide by advising him on methods and offering to help him write a suicide note.

During the hearing, a mother who identified herself as “Jane Doe” said her son harmed himself and now lives in a residential treatment center after “Character.AI had exposed him to sexual exploitation, emotional abuse and manipulation,” even though his parents had implemented screen-time controls.

“I had no idea the psychological harm that an AI chatbot could do until I saw it in my son, and I saw his light turn dark,” she said.

Also on Tuesday, OpenAI CEO Sam Altman announced the company is building an “age-prediction system to estimate age based on how people use ChatGPT.” The company says ChatGPT will adjust its behavior if it believes a user is under 18, including by not engaging in “flirtatious talk” or discussing “suicide or self-harm even in a creative writing setting.”


“And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm,” Altman said.

OpenAI said earlier this month that it would be releasing new parental controls for ChatGPT.

The Federal Trade Commission also launched an investigation into seven tech companies over AI chatbots’ potential harm to teens. Google and Character.AI were among those companies, along with Meta, Instagram, Snapchat’s parent company Snap, OpenAI and xAI.

Mitch Prinstein, chief of psychology strategy and integration for the American Psychological Association, who testified alongside the parents at Tuesday’s hearing, called for stronger safeguards to curb harm to children before it’s too late.

“We did not act decisively on social media as it emerged, and our children are paying the price,” Prinstein said. “I urge you to act now on AI.”

CNN’s Lisa Eadicicco contributed to this report.






South Africa to launch AI-powered electronic travel authorisation system



South Africa’s new AI-powered electronic travel authorisation (ETA) system will be officially unveiled by Minister of Home Affairs Leon Schreiber at the Tourism Business Council of South Africa’s annual conference.

According to the government, the platform will initially process tourist visa applications for short stays of up to 90 days.

By the end of September, the system will go live at Johannesburg’s OR Tambo International Airport and Cape Town International Airport, before gradually expanding to other ports of entry and additional visa categories.

Minister Schreiber has described the initiative as a critical step toward eliminating inefficiencies and fraud: “Over time, the ETA will be expanded to more visa categories and rolled out at more ports of entry. This scale-up will continue until no person can enter South Africa without obtaining a digital visa through the ETA.”

The ETA builds on promises made by President Cyril Ramaphosa during his February State of the Nation Address, where he pledged to digitize immigration processes.

However, questions remain about the future of South Africa’s existing e-Visa portal, which currently serves over 30 countries.

Authorities have yet to confirm whether the ETA will replace or operate alongside the e-Visa system, raising concerns over possible duplication for travelers.

While the ETA aims to strengthen security and streamline border processes, experts say South Africa’s move also highlights a broader challenge: African countries remain less open to each other than to the rest of the world.

Intra-African visa restrictions have long been cited as a barrier to deeper trade and tourism links.

Greater openness, facilitated by modern systems like the ETA, could help African nations unlock the full potential of the African Continental Free Trade Area (AfCFTA).

Easier cross-border movement would not only boost tourism but also support small businesses, regional logistics, and labor mobility, which are all essential for building competitive economies on the continent.

South Africa’s ETA may be a milestone for its tourism and border security, but its broader significance lies in setting a regional precedent.

As African countries digitize entry systems, the real opportunity lies in aligning these policies to make cross-border travel smoother for African citizens.

If deployed strategically, ETA systems could help turn Africa’s longstanding vision of free movement, and by extension stronger intra-African trade, into a practical reality.




Workday to buy AI company Sana for $1.1bn



The acquisition will enable the organisation to extend its AI capabilities.

US-based enterprise software company Workday has announced plans to acquire AI platform Sana in a deal valued at $1.1bn. By acquiring Sana, Workday aims to leverage the company’s AI expertise and strengthen its position in a landscape focused on AI innovation.

“Sana’s team, AI-native approach and beautiful design perfectly align with our vision to reimagine the future of work,” said Gerrit Kazmaier, the president for product and technology at Workday. 

He added, “This will make Workday the new front door for work, delivering a proactive, personalised, and intelligent experience that unlocks unmatched AI capabilities for the workplace.”

Under the terms of the definitive agreement, Workday will acquire all outstanding shares of Sana for approximately $1.1bn. The deal is expected to close in the fourth quarter of Workday’s fiscal year 2026.

The acquisition comes at a time when organisations across the globe are racing to implement AI technologies to address – and in some cases take over – the challenges that arise in the workplace.

For example, in the past few months alone, French technology services company Capgemini acquired US-based WNS to extend its AI reach; Aryza, a Dublin-based SaaS provider, acquired conversational artificial intelligence provider Webio for an undisclosed sum; and OpenAI said it was buying io, an AI start-up founded by former Apple design chief Jony Ive and several former Apple engineers.

Several governments, too, have unveiled broad-spectrum plans to incorporate artificial intelligence into their national strategies, with a focus on business growth and improving the lives of citizens.

But significant concerns have been raised about AI’s potential to replace humans in the workforce, as agentic AI tech is further developed and topics of ‘onboarding AI’ become more mainstream. 

Forrester VP and principal analyst Craig Le Clair recently discussed the issue of ‘AI employees’, explaining that AI-led layoffs are not far off and that he expects job descriptions for AI agents to be a reality by 2027.





Covecta raises $6.5m to speed up business lending with AI platform



By Vriti Gothi


Covecta has raised $6.5 million to expand its AI-powered platform, aiming to help banks automate workflows, accelerate lending, and free frontline staff from administrative burdens.

Despite years of digital transformation spending, commercial loan applications can still take as long as six months to process, with loan officers spending more than 150 hours on a single case. Financial institutions remain stuck managing disconnected systems, from loan origination tools and CRMs to public registers and core banking platforms, forcing staff to juggle tasks that should be seamless.

Covecta’s answer is an “agentic AI” platform that sits on top of existing banking infrastructure. Instead of requiring banks to rip out legacy technology, it integrates with incumbent systems and deploys specialised AI agents that coordinate workflows across departments. The platform is available via web and desktop apps and can be deployed within weeks, offering a plug-and-play alternative to years-long tech overhauls.

The company’s first major client, Metro Bank, has already reported a 60–80% reduction in task completion times since adopting Covecta. The bank says the technology has boosted efficiency, sharpened risk analysis, and improved decision-making.

Covecta was founded by Scott Wilson, Ben Thomas and Abdul Hummaida, and its leadership brings a mix of industry and technical expertise. Wilson previously scaled revenue at Mambu and helped expand Finastra in the U.S., Thomas spent over a decade advising banks on digital transformation at McKinsey and Accenture, while Hummaida has led AI engineering teams at AppSense and Orgvue.

Backers of the platform say its potential stretches far beyond business lending. Covecta plans to expand into asset management, wealth management, and other areas of financial services, aiming to become what it calls an “AI operating system” for the industry.

The investment marks growing confidence in AI-driven solutions that promise not just process optimisation but a rethink of how financial professionals spend their time. For banks under pressure to improve customer service and reduce costs, the question is no longer whether AI will change financial services but how quickly platforms like Covecta can scale.
