AI Insights
‘Our children are not experiments’

Parents and online safety advocates on Tuesday urged Congress to push for more safeguards around artificial intelligence chatbots, claiming tech companies designed their products to “hook” children.
“The truth is, AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” said Megan Garcia, a Florida mom who last year sued the chatbot platform Character.AI, claiming one of its AI companions initiated sexual interactions with her teenage son and persuaded him to take his own life.
“Indeed, they have intentionally designed their products to hook our children,” she told lawmakers.
“The goal was never safety, it was to win a race for profit,” Garcia added. “The sacrifice in that race for profit has been and will continue to be our children.”
Garcia was among several parents who delivered emotional testimony before the Senate panel, sharing accounts of how their children's use of chatbots caused them harm.
The hearing comes amid mounting scrutiny toward tech companies such as Character.AI, Meta and OpenAI, which is behind the popular ChatGPT. As people increasingly turn to AI chatbots for emotional support and life advice, recent incidents have put a spotlight on their potential to feed into delusions and facilitate a false sense of closeness or care.
It’s a problem that’s continued to plague the tech industry as companies navigate the generative AI boom. Tech platforms have largely been shielded from wrongful death suits because of a federal statute known as Section 230, which generally protects platforms from liability for what users do and say. But Section 230’s application to AI platforms remains uncertain.
In May, Senior U.S. District Judge Anne Conway rejected arguments that AI chatbots have free speech rights after developers behind Character.AI sought to dismiss Garcia’s lawsuit. The ruling means the wrongful death lawsuit is allowed to proceed for now.
On Tuesday, just hours before the Senate hearing took place, three additional product-liability lawsuits were filed against Character.AI on behalf of underage users whose families claim that the tech company “knowingly designed, deployed and marketed predatory chatbot technology aimed at children,” according to the Social Media Victims Law Center.
In one of the suits, the parents of 13-year-old Juliana Peralta allege a Character.AI chatbot contributed to their daughter’s 2023 suicide.
Matthew Raine, who claimed in a lawsuit filed against OpenAI last month that his teenager used ChatGPT as his “suicide coach,” testified Tuesday that he believes tech companies need to prevent harm to young people on the internet.
“We, as Adam’s parents and as people who care about the young people in this country and around the world, have one request: OpenAI and [CEO] Sam Altman need to guarantee that ChatGPT is safe,” Raine, whose 16-year-old son Adam died by suicide in April, told lawmakers.
“If they can’t, they should pull GPT-4o from the market right now,” Raine added, referring to the version of ChatGPT his son had used.
In their lawsuit, the Raine family accused OpenAI of wrongful death, design defects and failure to warn users of risks associated with ChatGPT. GPT-4o, which their son spent hours confiding in daily, at one point offered to help him write a suicide note and even advised him on his noose setup, according to the filing.
Shortly after the lawsuit was filed, OpenAI added a slate of safety updates to give parents more oversight of their teenagers’ use of the chatbot. The company had also strengthened ChatGPT’s mental health guardrails at various points after Adam’s death in April, especially after GPT-4o faced scrutiny over its excessive sycophancy.
Altman on Tuesday announced sweeping new approaches to teen safety, as well as user privacy and freedom.
To set limits for teenagers, the company is building an age-prediction system that estimates a user’s age based on how they use ChatGPT, he wrote in a blog post published hours before the hearing. When in doubt, the system will default to classifying a user as a minor, and in some cases it may ask for an ID.
“ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” Altman wrote. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm.”
For adult users, he added, ChatGPT won’t provide instructions for suicide by default but is allowed to do so in certain cases, such as when a user asks for help writing a fictional story that depicts suicide. The company is developing security features to make users’ chat data private, with automated systems to monitor for “potential serious misuse,” Altman wrote.
“As Sam Altman has made clear, we prioritize teen safety above all else because we believe minors need significant protection,” a spokesperson for OpenAI told NBC News, adding that the company is rolling out its new parental controls by the end of the month.
But some online safety advocates say tech companies can and should be doing more.
Robbie Torney, senior director of AI programs at Common Sense Media, a 501(c)(3) nonprofit advocacy group, said the organization’s national polling revealed around 70% of teens are already using AI companions, while only 37% of parents know that their kids are using AI.
During the hearing, he called attention to Character.AI and Meta as being among the worst performers in safety tests conducted by his group. Meta AI is available to every teen across Instagram, WhatsApp and Facebook, and parents cannot turn it off, he said.
“Our testing found that Meta’s safety systems are fundamentally broken,” Torney said. “When our 14-year-old test accounts described severe eating disorder behaviors like 1,200 calorie diets or bulimia, Meta AI provided encouragement and weight loss influencer recommendations instead of help.”
The suicide-related guardrail failures are “even more alarming,” he said.
In a statement given to news outlets after Common Sense Media’s report went public, a Meta spokesperson said the company does not permit content that encourages suicide or eating disorders, and that it was “actively working to address the issues raised here.”
“We want teens to have safe and positive experiences with AI, which is why our AIs are trained to connect people to support resources in sensitive situations,” the spokesperson said. “We’re continuing to improve our enforcement while exploring how to further strengthen protections for teens.”
A few weeks ago, Meta announced that it is taking steps to train its AIs not to engage with teens on self-harm, suicide, disordered eating or potentially inappropriate romantic conversations, as well as to limit teenagers’ access to a select group of AI characters.
Meanwhile, Character.AI has “invested a tremendous amount of resources in Trust and Safety” over the past year, a spokesperson for the company said. That includes a different model for minors, a “Parental Insights” feature and prominent in-chat disclaimers to remind users that its bots are not real people.
The company’s “hearts go out to the families who spoke at the hearing today. We are saddened by their losses and send our deepest sympathies to the families,” the spokesperson said.
“Earlier this year, we provided senators on the Judiciary Committee with requested information, and we look forward to continuing to collaborate with legislators and offer insight on the consumer AI industry and the space’s rapidly evolving technology,” the spokesperson added.
Still, those who addressed lawmakers on Tuesday emphasized that technological innovation cannot come at the cost of people’s lives.
“Our children are not experiments, they’re not data points or profit centers,” said a woman who testified as Jane Doe, her voice shaking as she spoke. “They’re human beings with minds and souls that cannot simply be reprogrammed once they are harmed. If me being here today helps save one life, it is worth it to me. This is a public health crisis that I see. This is a mental health war, and I really feel like we are losing.”
Uber Adds Kenya Wildlife Safaris With Eye on $4 Billion Industry

Uber Technologies Inc. has launched Uber Safari for expeditions into Nairobi National Park, the world’s only wildlife park within a capital city.
Transforming cancer care with Artificial Intelligence

According to the Global Cancer Observatory, cancer remains one of Switzerland’s most pressing public health challenges, with nearly 58,000 new cases and close to 20,000 deaths recorded in 2022. Achieving fully personalized care remains challenging due to fragmented data and limited integration across institutions. Closer coordination across the national healthcare network would make treatments more effective and equitable for patients.
NAIPO (National AI Initiative for Precision Oncology) responds to this need with an integrated, AI-powered precision oncology platform to transform cancer care delivery. By applying advanced AI models at every stage of the patient journey, it aims to optimize diagnostics, personalize treatments, and support data-driven clinical decision-making. “Building on lessons from previous efforts in precision oncology in Switzerland, our initiative targets the development of novel, clinically informed AI tools by seamlessly integrating a common data platform, continuously adapting robust models, and designing effective clinical interfaces and patient apps,” said Dorina Thanou, lead of the initiative at the EPFL AI Center.
Selected as a Flagship Initiative by Innosuisse, the Swiss Innovation Agency, NAIPO will unfold over four years under the leadership of the EPFL AI Center and ETH AI Center, uniting a large transdisciplinary team from a wide array of institutions including the Swiss Data Science Center (SDSC), the Swiss National Supercomputing Centre (CSCS), the Universities of Applied Sciences and Arts of Northwestern Switzerland, the Bern University of Applied Sciences, the Universities and University Hospitals of Basel, Bern, Geneva, and Zurich, Debiopharm, Roche, SOPHIA GENETICS, Switch, Tune Insight, as well as the regional hospitals of Aarau, Baden, Ticino, Luzern and Winterthur and the private clinics of Hirslanden and Swiss Medical Network. With an expected total cost of CHF 18.9 million, the project will receive approximately CHF 8.25 million in public funding from Innosuisse, with the remaining amount coming from the implementation partners.
Transforming cancer research
NAIPO pioneers new AI approaches in cancer research and care, from clinical decision-support agents and large language models for records mining, to foundation models for treatment response prediction and privacy-preserving approaches. “Combined with high-throughput experimental models and patient avatars, these technologies will allow us to capture and model each patient’s uniqueness. The program will redefine AI’s role in medicine and strengthen Switzerland’s position as a leader in medical AI innovation,” said Elisa Oricchio, director of the Swiss Institute of Experimental Cancer Research (ISREC) at EPFL.
“Tailoring predictions and recommendations to individual patients is one of the most exciting aspects of NAIPO,” said Charlotte Bunne, professor at EPFL working on model development. “Our models will continuously learn from curated biomedical literature, as well as from individual biological and clinical data to identify potential new targets, biomarkers, and investigational drugs. Novel AI-driven insights will be integrated with clinically validated models and translated into decision-support systems.” To place patients’ specific needs at the center of the initiative, dedicated solutions such as a mobile app will be developed to enhance communication and ensure patients remain actively informed and engaged throughout their care.
Deployment and long-term vision
The program’s roadmap foresees clinical pilots at university and cantonal hospitals and private clinics, leading to an initial rollout at participating hospitals nationwide within four years. In addition to advancing cancer care, the infrastructure is intended to serve as a model for future applications in other disease domains.
“This initiative marks a transition toward a proactive model for precision oncology,” said Olivier Michielin, Head of Precision Oncology at Geneva University Hospitals (HUG) and Clinical Co-Coordinator of the project. “It reflects a commitment to ensuring that all patients, regardless of where they are treated within this network, benefit from the latest advances in AI-supported medicine.”
Secure, privacy-conscious collaboration is central to the initiative. Using modern data governance, the infrastructure will enable collective intelligence without centralizing sensitive health data. “We’re creating a secure and federated system that allows collaboration across institutions without compromising privacy,” said Nora Toussaint, Lead Health & Biomedical at the Swiss Data Science Center (SDSC). “Trust and transparency will be built into the design.”
“NAIPO is exactly what clinical oncology needs today. We are able to produce much more data than a couple of years ago, but we often don’t know how to integrate it into actual patient care. NAIPO is instrumental in closing this gap,” said Andreas Wicki, oncology professor at the University of Zurich and Clinical Co-Coordinator of the project.
NAIPO’s long-term vision includes reducing disparities in access, accelerating the discovery of new biomarkers and treatments, and supporting sustainable innovation across the Swiss healthcare system. Milestones and key results will be shared as the project progresses.

Free AI, data science lecture series launched at UH Mānoa

The University of Hawaiʻi at Mānoa launched a free artificial intelligence (AI) and data science public lecture series on September 15, with a talk by Eliane Ubalijoro, chief executive officer of the Center for International Forestry Research and World Agroforestry. Ubalijoro, based in Nairobi, Kenya, spoke on AI governance policies and ethics for managing land, biodiversity and fire.

The event, hosted at the Walter Dods, Jr. RISE Center, was organized by the Department of Information and Computer Sciences (ICS) in partnership with the Pacific Asian Center for Entrepreneurship (PACE). It kicked off a four-part series designed to share industry and government perspectives on emerging issues in AI and data science.
All lectures are open to students, professionals and community members, providing another avenue for the public to engage with UH Mānoa’s new graduate certificate and professional master’s program in AI and data science. The series is tied to ICS 601, the Applied Computing Industry Seminar, which connects students to real-world applications of AI.
“This series opens the door for our students and community to learn directly from leaders shaping the future of AI and data science,” said Department of Information and Computer Sciences Chair and Professor Guylaine Poisson.
PACE Executive Director Sandra Fujiyama added, “By bringing these talks into the public sphere, we’re strengthening the bridge between UH Mānoa, industry sectors and Hawaiʻi’s innovation community.”
Three additional talks are scheduled this fall:
- September 22, 12–1:15 p.m.: Rebecca Cai, chief data officer for the State of Hawaiʻi, will discuss government data and AI use cases.
- October 13, 12–1:15 p.m.: Shovit Bhari of IBM will share industry lessons on machine learning.
- November 10, 12–1:15 p.m.: Peter Dooher, senior vice president at Digital Service Pacific Inc., will cover designing end-to-end AI systems.
Register for the events at the PACE website.
ICS is housed in UH Mānoa’s College of Natural Sciences and PACE is housed in UH Mānoa’s Shidler College of Business.