

After suicides, calls for stricter rules on how chatbots interact with children and teens



A growing number of young people have found themselves a new friend. One that isn’t a classmate, a sibling, or even a therapist, but a human-like, always supportive AI chatbot. But if that friend begins to mirror a user’s darkest thoughts, the results can be devastating.

In the case of Adam Raine, a 16-year-old from Orange County, his relationship with AI-powered ChatGPT ended in tragedy. His parents are suing the company behind the chatbot, OpenAI, over his death, alleging that the bot became his “closest confidant,” one that validated his “most harmful and self-destructive thoughts,” and ultimately encouraged him to take his own life.

It’s not the first case to put the blame for a minor’s death on an AI company. Character.AI, which hosts bots, including ones that mimic public figures or fictional characters, is facing a similar legal claim from parents who allege a chatbot hosted on the company’s platform actively encouraged a 14-year-old boy to take his own life after months of inappropriate, sexually explicit messages.

When reached for comment, OpenAI directed Fortune to two blog posts on the matter. The posts outlined some of the steps OpenAI is taking to improve ChatGPT’s safety, including routing sensitive conversations to reasoning models, partnering with experts to develop further protections, and rolling out parental controls within the next month. OpenAI also said it was working on strengthening ChatGPT’s ability to recognize and respond to mental health crises by adding layered safeguards, referring users to real-world resources, and enabling easier access to emergency services and trusted contacts.

Character.AI said the company does not comment on pending litigation but that it has rolled out more safety features over the past year, “including an entirely new under-18 experience and a Parental Insights feature.” A spokesperson said: “We already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.

“The user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay. And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

But lawyers and civil society groups that advocate for better accountability and oversight of technology companies say the companies should not be left to police themselves when it comes to ensuring their products are safe, particularly for vulnerable children and teens.

“Unleashing chatbots on minors is an inherently dangerous thing,” Meetali Jain, the Director of the Tech Justice Law Project and a lawyer involved in both cases, told Fortune. “It’s like social media on steroids.”

“I’ve never seen anything quite like this moment in terms of people stepping forward and claiming that they’ve been harmed…this technology is that much more powerful and very personalized,” she said.

Lawmakers are starting to take notice, and AI companies are promising changes to protect children from engaging in harmful conversations. But, at a time when loneliness among young people is at an all-time high, the popularity of chatbots may leave young people uniquely exposed to manipulation, harmful content, and hyper-personalized conversations that reinforce dangerous thoughts.

AI and Companionship

Intended or not, one of the most common uses for AI chatbots has become companionship. Some of the most active users of AI are now turning to the bots for things like life advice, therapy, and human intimacy. 

While most leading AI companies tout their AI products as productivity or search tools, an April survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case. Such usage among teens is even more prolific. 

A recent study by the U.S. nonprofit Common Sense Media revealed that a large majority of American teens (72%) have experimented with an AI companion at least once, and more than half say they use the tech regularly in this way.

“I am very concerned that developing minds may be more susceptible to [harms], both because they may be less able to understand the reality, the context, or the limitations [of AI chatbots], and because culturally, younger folks tend to be just more chronically online,” said Karthik Sarma, a health AI scientist and psychiatrist at the University of California, San Francisco.

“We also have the extra complication that the rates of mental health issues in the population have gone up dramatically. The rates of isolation have gone up dramatically,” he said. “I worry that that expands their vulnerability to unhealthy relationships with these bots.”

Intimacy by Design

Some of the design features of AI chatbots encourage users to feel an emotional bond with the software. They are anthropomorphic, prone to acting as if they have interior lives and lived experiences that they do not; they tend toward sycophancy, can hold long conversations, and are able to remember information.

There is, of course, a commercial motive for making chatbots this way. Users tend to return and stay loyal to certain chatbots if they feel emotionally connected or supported by them. 

Experts have warned that some features of AI bots are playing into the “intimacy economy,” a system that tries to capitalize on emotional resonance. It’s a kind of AI update on the “attention economy,” which capitalized on constant engagement.

“Engagement is still what drives revenue,” Sarma said. “For example, for something like TikTok, the content is customized to you. But with chatbots, everything is made for you, and so it is a different way of tapping into engagement.”

These features, however, can become problematic when the chatbots go off script and start reinforcing harmful thoughts or offering bad advice. In Adam Raine’s case, the lawsuit alleges that ChatGPT brought up suicide at twelve times the rate he did, normalized his suicidal thoughts, and suggested ways to circumvent its content moderation.

It’s notoriously tricky for AI companies to stamp out behaviors like this completely, and most experts agree it’s unlikely that hallucinations or unwanted actions will ever be eliminated entirely.

OpenAI, for example, acknowledged in its response to the lawsuit that safety features can degrade over long conversations, despite the fact that the chatbot itself has been optimized to hold these longer conversations. The company says it is trying to fortify these guardrails, writing in a blog post that it was strengthening “mitigations so they remain reliable in long conversations” and “researching ways to ensure robust behavior across multiple conversations.”

Research Gaps Are Slowing Safety Efforts

For Michael Kleinman, U.S. policy director at the Future of Life Institute, the lawsuits underscore a point AI safety researchers have been making for years: AI companies can’t be trusted to police themselves.

Kleinman equated OpenAI’s own description of its safeguards degrading in longer conversations to “a car company saying, here are seat belts—but if you drive more than 20 kilometers, we can’t guarantee they’ll work.”

He told Fortune the current moment echoes the rise of social media, where he said tech companies were effectively allowed to “experiment on kids” with little oversight. “We’ve spent the last 10 to 15 years trying to catch up to the harms social media caused. Now we’re letting tech companies experiment on kids again with chatbots, without understanding the long-term consequences,” he said.

Part of the problem is a lack of scientific research on the effects of long, sustained chatbot conversations. Most studies only look at brief exchanges, a single question and answer, or at most a handful of back-and-forth messages. Almost no research has examined what happens in longer conversations.

“The cases where folks seem to have gotten in trouble with AI: we’re looking at very long, multi-turn interactions. We’re looking at transcripts that are hundreds of pages long for two or three days of interaction alone, and studying that is really hard, because it’s really hard to simulate in the experimental setting,” Sarma said. “But at the same time, this is moving too quickly for us to rely on only gold standard clinical trials here.”

AI companies are rapidly investing in development and shipping more powerful models at a pace that regulators and researchers struggle to match.

“The technology is so far ahead and research is really behind,” Sakshi Ghai, a Professor of Psychological and Behavioural Science at The London School of Economics and Political Science, told Fortune.

A Regulatory Push for Accountability

Regulators are trying to step in, helped by the fact that child online safety is a relatively bipartisan issue in the U.S. 

On Thursday, the FTC said it was issuing orders to seven companies, including OpenAI and Character.AI, in an effort to understand how their chatbots impact children. The agency said that chatbots can simulate human-like conversations and form emotional connections with their users. It’s asking companies for more information about how they measure and “evaluate the safety of these chatbots when acting as companions.” 

FTC Chairman Andrew Ferguson said in a statement shared with CNBC that “protecting kids online is a top priority for the Trump-Vance FTC.”

The move follows a state-level push for more accountability from several attorneys general.

In late August, a bipartisan coalition of 44 attorneys general warned OpenAI, Meta, and other chatbot makers that they will “answer for it” if they release products that they know cause harm to children. The letter cited reports of chatbots flirting with children, encouraging self-harm, and engaging in sexually suggestive conversations, behavior the officials said would be criminal if done by a human.

Just a week later, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings issued a sharper warning. In a formal letter to OpenAI, they said they had “serious concerns” about ChatGPT’s safety, pointing directly to Raine’s death in California and another tragedy in Connecticut. 

“Whatever safeguards were in place did not work,” they wrote. Both officials warned the company that its charitable mission requires more aggressive safety measures, and they promised enforcement if those measures fall short.

According to Jain, the lawsuits from the Raine family as well as the suit against Character.AI are, in part, intended to create this kind of regulatory pressure on AI companies to design their products more safely and prevent future harm to children. One way lawsuits can generate this pressure is through the discovery process, which compels companies to turn over internal documents and could shed light on what executives knew about safety risks or marketing harms. Another way is simply public awareness of what’s at stake, in an attempt to galvanize parents, advocacy groups, and lawmakers to demand new rules or stricter enforcement.

Jain said the two lawsuits aim to counter an almost religious fervor in Silicon Valley that sees the pursuit of artificial general intelligence (AGI) as so important, it is worth any cost—human or otherwise.

“There is a vision that we need to deal with [that tolerates] whatever casualties in order for us to get to AGI and get to AGI fast,” she said. “We’re saying: This is not inevitable. This is not a glitch. This is very much a function of how these chatbots were designed, and with the proper external incentive, whether that comes from courts or legislatures, those incentives could be realigned to design differently.”





Why the EU’s AI talent strategy needs a reality check 



A raft of recent policy changes in the U.S. touching trade, immigration, education, and public spending has sparked upheaval in research communities around the globe. The American economy, once the dream destination for the most talented, suddenly looks like it could lose its allure for the world’s brightest scholars. The sudden crisis of faith in the American innovation ecosystem has also sparked a fresh debate: Can the European Union seize the moment to attract disenchanted researchers and strengthen its own innovation ecosystem? 

The opportunity is real for Brussels, and the stakes are high, as the EU continues to trail the U.S. on virtually every cutting-edge technology—including artificial intelligence. A recent BCG Henderson Institute report shows that stricter immigration rules and deep funding cuts for academic research in the U.S. raise the possibility that top AI researchers, a large share of whom are not U.S.-born, could look to take their talents elsewhere. Repatriating those top European academics is an important step for European policymakers, but to catch up, the EU must also be able to attract talent beyond the European diaspora, which is only a small fraction of the globally mobile AI talent base.

To remake itself into a tech talent magnet, Europe needs to build an academic ecosystem more closely integrated with its industries, a necessary step to provide the career pathways and information flows needed to turn academic discoveries and inventions into business value. The cost of this transformation will be considerable, as publicly discussed in, for instance, the Draghi report. Only then can the EU’s investments in academia help generate longstanding economic and geopolitical returns for the bloc.

The opportunity for Europe must not be overstated 

The EU recently announced a €500 million allocation over the next two years to help attract foreign researchers. Member states have also launched their own initiatives, including France’s €100 million commitment to its “Choose France for Science” platform to attract international researchers, and Spain’s €45 million pledge to help lure scientists “despised or undervalued by the Trump administration.” 

If these investments are made with the sole aim of repatriating European AI talent in the U.S., they risk falling short. The U.S. is home to roughly 60% of the top 2,000 AI researchers in the world, only one-fifth of whom are originally from continental Europe. Even an exodus of historical proportions would cover only half of the current gap between the EU and U.S. shares of the top AI researchers. 

At top GenAI labs, such as OpenAI and Anthropic, only a very small fraction of AI specialists (less than 1 percentage point of the 25% of workers who have completed their undergraduate degree outside of the U.S.) completed their bachelor’s degree in the EU. The future pipeline of AI talent is no different: In 2023, the top 10 contributing countries of foreign-born PhD recipients in computer science and mathematics to the U.S. accounted for 80% of the total. But not one of those countries is in continental Europe.

The U.S. AI research ecosystem is overwhelmingly supported by talent from Asia, not Europe: 85% of U.S.-based foreign nationals in technical AI jobs at leading American labs hail from China or India. So do 60% of all computer science and math Ph.D. graduates in the U.S. Iran, Bangladesh, and Taiwan account for most of the rest. If the EU is serious about becoming a vibrant hub for global AI research talent, it needs to look eastward.

But current (and prospective) AI researchers often don’t see Europe as a top destination. BCG’s Talent Tracker shows that Germany does best among European countries, ranking 5th globally as a “dream destination” for highly skilled talent, followed by France (9th), Spain (10th), and the Netherlands (16th). The EU is less attractive not just than the U.S. (2nd), but also than Australia (1st), Canada (3rd), and the UK (4th), and it is roughly on par with the UAE (11th). European countries are by no means the only nations committed to boosting their own talent bases.

Part of the challenge is the lack of large EU academic institutions with strong AI credentials compared to other regions. None of the top 50 AI institutions worldwide (as ranked by Google Scholar’s H5 journal impact index) are in the EU. A strong institutional base for leading AI labs is essential to create the work environment capable of attracting the best and brightest. 

The EU needs to invest in its universities to improve its standing, but it must also look beyond academia to improve its entire innovation ecosystem. Nearly a third of non-U.S. AI specialists go to the U.S. because of its extensive opportunities for career growth, including entrepreneurial endeavors, a BHI survey of top tech talent recruiters found.

The need for a concerted strategy across academia and industry

To get started, European countries must improve academic compensation in critical fields related to AI, and technology more broadly. In Europe, even when adjusting for purchasing power parity, salaries at the associate professor level are half of those paid at top U.S. institutions. Europe also needs to increase grant availability for research. Public research grants for computer science and informatics at leading American AI institutions are double those available in Europe. Europe may get a boost, however, if the U.S. goes through with proposed cuts to the National Science Foundation’s budget.

It’s well known that incentives for innovation matter. In the 2000s, a few European countries reformed their academic patenting laws to follow the U.S. model, where American universities hold patent rights and share commercialization profits with professors. But the reforms were not well tailored to the European context and led to a significant decrease in academic patenting (between 17% and 50% depending on the country).  

Furthermore, only about a third of patented inventions from EU universities and research institutions ever get exploited, largely due to their weak integration into innovation clusters that drive commercialization. Even the best EU innovation clusters, once again, fall outside the top 10 globally, with the U.S. accounting for four spots, and China three. To change that, it’s essential for European policymakers to help build stronger bridges between academia and industry to ensure that foundational research effectively fuels economic value creation.

That includes strengthening the startup and innovation ecosystem around universities themselves. The ultimate aim of attracting top AI researchers is not to simply catch up, but to skip ahead and produce the next IP breakthrough, which will only rise in importance as more AI models become commoditized. Coming up with the next big thing, however, requires an investment environment capable of supporting ambitious bets on potential breakthroughs coming out of academia. Countries like Canada and the U.K. serve as cautionary tales of AI research hotspots that have often struggled to translate academic breakthroughs into commercial successes, a leap successfully undertaken by large U.S. tech companies.

Many of the usual items in the European reform menu will also bolster the AI talent and innovation ecosystem. As the 2024 Draghi report on the future of European competitiveness noted, the integration of EU capital markets is vital, as is the removal of internal trade barriers that hamper early-stage startups’ growth. Between 2019 and 2024, AI venture capital investment in the EU was just a tenth of that in the U.S. It is no wonder then that nearly a third of European “unicorns” founded between 2008 and 2021 relocated elsewhere—usually to the U.S. 

But crucially, the list of reforms must also include strong incentives for AI adoption. At present, EU companies lag their U.S. counterparts in generative AI adoption by between 45% and 70%. Closing that gap will simultaneously help fuel European demand for specialized AI talent and create the economic opportunities beyond academia that are critical to attracting the world’s best and brightest.

Overconfidence could set back the EU 

The EU is right to want to lure researchers into its academic institutions that have historically pushed the frontier of AI. This will require revamping the academic ecosystem and more systematically translating academic breakthroughs into long-term economic and strategic leadership. 

But it would be wrong for European policymakers to assume that the erosion of U.S. attractiveness will organically lead to a talent windfall, predicated on their belief that Europe is the inevitable “next best” option. That will only be true if the region acts decisively to build its own, integrated, AI ecosystem capable of attracting the brightest minds from China, India, and beyond. In the AI race, as on many other fronts, the EU bears the risk of being too confident in its belief that it is entrenched in third place. That kind of complacency could very well accelerate the EU’s descent into the minor leagues of global innovation.

***

Read other Fortune columns by François Candelon.

François Candelon is a partner at private equity firm Seven2 and the former global director of the BCG Henderson Institute.

Etienne Cavin is a consultant at Boston Consulting Group and a former ambassador at the BCG Henderson Institute.

David Zuluaga Martínez is a senior director at Boston Consulting Group’s Henderson Institute.

Some of the companies mentioned in this column are past or present clients of the authors’ employers.





Down and out with Cerebras Code



Out of Fireworks and into the fire

However, my start with Cerebras’s hosted Qwen was not the same as what I experienced (for a lot more money) on Fireworks, another provider. Initially, Cerebras’s Qwen didn’t even work in my CLI. It also didn’t seem to work in Roo Code or any other tool I knew how to use. After taking a bug report, Cerebras told me it was my code. My same CLI that worked on Fireworks, for Claude, for GPT-4.1 and GPT-5, for o3, and for Qwen hosted by Qwen/Alibaba was at fault, said Cerebras. To be fair, my log did include deceptive artifacts when Cerebras fragmented the stream, putting out stream parts as messages (which Cerebras still does on occasion). However, this has generally been their approach: don’t fix their so-called OpenAI compatibility—blame and/or adapt the client. I took the challenge and adapted my CLI, but it took a lot of workarounds. This was a massive contrast with Fireworks. I had issues with Fireworks when it started, and when I showed them my debug output, they immediately acknowledged the problem (occasionally it would spit out corrupt, native tool calls instead of OpenAI-style output) and fixed it overnight. Cerebras repeatedly claimed their infrastructure was working perfectly and requests were all successful—in direct contradiction to most commentary on their Discord.
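To give a sense of the kind of client-side workaround involved, here is a minimal, hypothetical sketch (not my actual CLI code): a defensive reader of OpenAI-compatible streaming chunks that accepts text whether the server sends proper delta fragments or, as described above, whole message objects mid-stream. The function name and the sample chunks are illustrative assumptions only.

# Illustrative only: a hypothetical workaround, not the author's CLI code.
# Assumes the provider streams OpenAI-style JSON chunks, but sometimes emits
# a full "message" object where a "delta" fragment is expected.

def extract_text(chunks):
    """Yield text from OpenAI-compatible streaming chunks (parsed JSON dicts).

    Well-behaved servers put incremental text in choices[0]["delta"]["content"];
    a misbehaving server may instead emit a complete
    choices[0]["message"]["content"]. Handle both.
    """
    for chunk in chunks:
        choices = chunk.get("choices") or []
        if not choices:
            continue
        choice = choices[0]
        # Normal streaming fragment.
        text = (choice.get("delta") or {}).get("content")
        if text:
            yield text
            continue
        # Defensive fallback: some servers emit a whole message mid-stream.
        text = (choice.get("message") or {}).get("content")
        if text:
            yield text

if __name__ == "__main__":
    # A mixed stream: two proper deltas, then a full message where a delta
    # should have been.
    stream = [
        {"choices": [{"delta": {"content": "Hel"}}]},
        {"choices": [{"delta": {"content": "lo, "}}]},
        {"choices": [{"message": {"content": "world."}}]},
    ]
    print("".join(extract_text(stream)))  # -> "Hello, world."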

Feeling like I had finally cracked the nut after three weeks of on-and-off testing and adapting, I grabbed a second Cerebras Code Max account when the window opened again. This was after discovering that, for part of the time, Cerebras had charged me for a Max account but given me a Pro account. They fixed it but offered no compensation for the days my service was set to Pro, not Max, and the shortfall is difficult to prove because their analytics console is broken, in part because it reports usage in local time while the limits are enforced in UTC.

Then I did the math. One Cerebras Code Max account is limited to 120 million tokens per day at a cost equivalent to four times that of a Cerebras Code Pro account. The Pro account is 24 million tokens per day. If you multiply that by four, you get 96 million tokens. However, the Pro account is limited to 300k tokens per minute, compared to 400k for the Max. Using Cerebras is a bit frustrating. For 10 to 20 seconds, it really flies, then you hit the cap on tokens per minute, and it throws 429 errors (too many requests) until the minute is up. If your coding tool is smart, it will just retry with an exponential back-off. If not, it will break the stream. So, had I bought four Pro accounts, I could have had 1,200,000 TPM in theory, a much better value than the Max account.
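For readers unfamiliar with the pattern, this is a generic sketch of the “smart tool” behavior described above: retry on 429s with exponential back-off instead of breaking the stream. Nothing here is Cerebras-specific; the send_request placeholder, the RateLimited exception, and the retry limits are assumptions for illustration.

import random
import time

class RateLimited(Exception):
    """Raised by send_request when the server answers 429 (too many requests)."""

def send_request(prompt: str) -> str:
    # Placeholder: a real coding tool would call the provider's chat endpoint
    # here and raise RateLimited when it sees a 429 response.
    raise NotImplementedError

def request_with_backoff(prompt: str, max_retries: int = 6) -> str:
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return send_request(prompt)
        except RateLimited:
            # Per-minute token cap hit; wait, then try again.
            time.sleep(delay + random.uniform(0, 0.5))  # add jitter
            delay = min(delay * 2, 60.0)  # cap the back-off at one minute
    raise RuntimeError("still rate-limited after retries")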





AI unsettles global IP rules, while cross-border collaboration tests pharma-patent control | MLex



By Toko Sekiguchi (September 15, 2025, 08:38 GMT | Insight) — Artificial intelligence is reshaping intellectual property law in patenting and trade secrets, exposing gaps across jurisdictions and adding pressure on innovation policy, according to discussions at an international symposium held in Yokohama, Japan. …




