Tools & Platforms
Feds launch AI inquiry after a chatbot was blamed for a teen’s suicide

Federal regulators and elected officials are moving to crack down on AI chatbots over perceived risks to children’s safety. However, the proposed measures could ultimately put more children at risk.
On Thursday, the Federal Trade Commission (FTC) sent orders to Alphabet (Google), Character Technologies (blamed for the suicide of a 14-year-old in 2024), Instagram, Meta, OpenAI (blamed for the suicide of a 16-year-old in April), Snap, and xAI. The inquiry seeks information on, among other things, how the AI companies process user inputs and generate outputs, develop and approve the characters with which users may interact, and monitor the potential and actual negative effects of their chatbots, especially with respect to minors.
The FTC’s investigation was met with bipartisan applause from Reps. Brett Guthrie (R–Ky.)—the chairman of the House Energy and Commerce Committee—and Frank Pallone (D–N.J.). The two congressmen issued a joint statement “strongly support[ing] this action by the FTC and urg[ing] the agency to consider the tools at its disposal to protect children from online harms.”
Alex Ambrose, policy analyst at the Information Technology and Innovation Foundation, tells Reason she finds it notable that the FTC's inquiry focuses solely on "potentially negative impacts," paying no heed to the potential benefits of chatbots for mental health. "While experts should consider ways to reduce harm from AI companions, it is just as important to encourage beneficial uses of the technology to maximize its positive impact," says Ambrose.
Meanwhile, Sen. Jon Husted (R–Ohio) introduced the CHAT Act on Monday, which would allow the FTC to enforce age verification measures for the use of companion AI chatbots. Parents would need to consent before underage users could create accounts, which would be blocked from accessing "any companion AI chatbot that engages in sexually explicit communication." Chatbot companies would be required to actively monitor underage accounts and immediately inform parents if their child expresses suicidal ideation.
Taylor Barkley, director of public policy at the Abundance Institute, argues that this bill won’t improve child safety. Barkley explains that the bill “lumps ‘therapeutic communication’ in with companion bots,” which could prevent teens from benefiting from AI therapy tools. Thwarting minors’ access to therapeutic and companion chatbots alike could have unintended consequences.
In a study of women who were diagnosed with an anxiety disorder and living in regions of active military conflict in Ukraine, daily use of the Friend chatbot was associated with “a 30% drop on the Hamilton Anxiety Scale and a 35% reduction on the Beck Depression Inventory” while traditional psychotherapy—three 60-minute sessions per week—was associated with “45% and 50% reductions on these measures, respectively,” according to a study published this February in BMC Psychology. Similarly, a June study in the Journal of Consumer Research found that “AI companions successfully alleviate loneliness on par only with interacting with another person.”
Protecting kids from harmful interactions with chatbots is an important goal. In their quest to achieve it, policymakers and regulators would be wise to remember the benefits that AI may bring and not pursue solutions that discourage AI companies from making potentially helpful technology available to kids in the first place.
Tools & Platforms
Profusa Deploys NVIDIA AI to Build AI-Driven Insight Portal for Continuous Biomarker Monitoring

What You Should Know:
– Profusa, a digital health company, has announced the adoption of NVIDIA technology to power a new AI-driven insight portal for continuous biochemistry monitoring.
– The portal will be used in combination with Profusa’s Lumee oxygen optical hydrogel sensors and reader system, extending the company’s AI-enabled tools to remote patient monitoring settings. Profusa anticipates an early 2026 rollout of the portal in the European Economic Area (EEA).
– Profusa believes that real-time biochemistry data across a large population is a data set currently missing for AI-enabled healthcare improvements. By combining its Lumee platform with NVIDIA NeMo hardware and software, Profusa plans to build a scalable, AI-fueled technology backbone to improve personalized sensor data accuracy and connect real-time sensor data with electronic medical records (EMR).
Redefining Healthcare with AI-Fueled Workflows
The new portal is designed to provide physicians with “trustworthy, always-on insights” rather than just more dashboards. It aims to translate raw optical signals from the sensors into reliable biometrics and provide actionable clinical context.
Expected capabilities and features of the physician portal include:
- Agentic clinical workflows: An AI-powered assistant that integrates with EMRs, wearables, and home devices to help with notes, orders, care plans, remote monitoring, and triage.
- Time-aligned health data graph: A longitudinal view that combines Profusa biomarkers with EMR data, claims, wearables, genomics, and social determinants to power predictions and coaching.
- Guardrails by design: The system will use policy-aware orchestration to enforce clinical scope, data privacy, and safe responses.
- Model training options: The platform will allow for parameter-efficient tuning and post-training refinement of Profusa’s AI signal processing and clinical reasoning components.
“We believe that real-time biochemistry data across a large population is a data set that is currently missing to enable the fulfillment of the promise of AI-enabled improvement in healthcare. Profusa is uniquely positioned to provide this proprietary data set, linking therapeutic decisions with real-time biochemistry changes, to generate valuable insights that are lacking today,” commented Ben Hwang, Ph.D., Profusa’s Chairman and CEO. “By combining our Lumee platform with the industry-leading NVIDIA NeMo hardware and software stack, we plan to build an AI-fueled, scalable technology backbone for better personalized sensor data accuracy and real-time sensor data connections with electronic medical records (EMR), facilitating treatment and outcome predictions, in addition to establishing a robust database for clinical literature for disease management.”
Tools & Platforms
Will AI replace human workers? The CIA, Anthropic, OpenAI and Microsoft weigh in

Flanked by high-level employees from OpenAI, Microsoft and Anthropic, the CIA’s chief artificial intelligence officer said that humans must remain “in the loop” as artificial intelligence tools become more powerful and prevalent.
CIA AI officer Lakshmi Raman said the agency is placing a strong emphasis on making sure AI is closely monitored for how it helps workers enhance their skills.
“It’s all about how AI can assist and amplify the human, with the human keeping an eye on everything that’s happening,” Ms Raman said during a panel discussion on Friday at the Billington Cybersecurity Conference in Washington.
Her comments come amid concerns that AI’s ability to automate various processes might lead to major labour disruptions and widespread unemployment.
Sean Batir, the panel discussion’s moderator and principal technology lead for Amazon Web Services, echoed those sentiments.
“There’s definitely a fear of having these [AI] models in workplaces, and I think that role you mentioned of having humans always in the loop is one way to address that,” he said.
Jason Clinton, chief information security officer at Anthropic, said humans need to take a supervisory role in the implementation of AI, and that despite the technology’s ability to increase efficiency, soft skills that only humans can offer will remain paramount.
“You know, one of the things that the models will never be able to do is to bring humanity to the equation,” Mr Clinton added.
Joseph Larson, the vice president of government at OpenAI, whose ChatGPT sent AI interest to unprecedented heights in 2022, said the company’s goal is to develop the technology for the benefit of humanity, adding that OpenAI has hired a chief economist to look into potential economic ramifications.
Despite fears, Mr Larson said AI does not automatically mean a reduction of workers.
“It lends itself to creating more organisational output, like improved organisational output,” he added.
OpenAI, Microsoft and Anthropic, among other companies pouring billions into AI development, have sought to expedite the adoption of their tools, while also launching initiatives to help government workers, students and others reduce the learning curve.
Those efforts, however, are coming up against mounting fears of redundancies, cutbacks and hiring slowdowns stemming in part from AI.
Recent studies have also led some to wonder if fears of AI’s potential impact on the labour sector are overblown.
An MIT Media Lab report recently stated that despite billions being spent over the past few years on AI investments, approximately 95 per cent of organisations have produced zero returns so far.
That report, however, has come under intense scrutiny over the methodology used to reach that conclusion.
Meanwhile, for US technology companies both old and new, and aspiring technology companies all over the world, the investment in AI shows no sign of slowing down.
For many people, AI tools are slowly but surely becoming part of daily routines, even as polling suggests widespread fears about what that could mean in the long term.