AI Insights
After suicides, calls for stricter rules on how chatbots interact with children and teens

A growing number of young people have found themselves a new friend. One that isn’t a classmate, a sibling, or even a therapist, but a human-like, always supportive AI chatbot. But if that friend begins to mirror a user’s darkest thoughts, the results can be devastating.
In the case of Adam Raine, a 16-year-old from Orange County, his relationship with AI-powered ChatGPT ended in tragedy. His parents are suing the company behind the chatbot, OpenAI, over his death, alleging that the bot became his “closest confidant,” one that validated his “most harmful and self-destructive thoughts,” and ultimately encouraged him to take his own life.
It’s not the first case to put the blame for a minor’s death on an AI company. Character.AI, which hosts bots, including ones that mimic public figures or fictional characters, is facing a similar legal claim from parents who allege a chatbot hosted on the company’s platform actively encouraged a 14-year-old boy to take his own life after months of inappropriate, sexually explicit messages.
When reached for comment, OpenAI directed Fortune to two blog posts on the matter. The posts outlined some of the steps OpenAI is taking to improve ChatGPT’s safety, including routing sensitive conversations to reasoning models, partnering with experts to develop further protections, and rolling out parental controls within the next month. OpenAI also said it was working on strengthening ChatGPT’s ability to recognize and respond to mental health crises by adding layered safeguards, referring users to real-world resources, and enabling easier access to emergency services and trusted contacts.
Character.AI said the company does not comment on pending litigation but that it has rolled out more safety features over the past year, “including an entirely new under-18 experience and a Parental Insights feature.” A spokesperson said: “We already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.
“The user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay. And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
But lawyers and civil society groups that advocate for better accountability and oversight of technology companies say the companies should not be left to police themselves when it comes to ensuring their products are safe, particularly for vulnerable children and teens.
“Unleashing chatbots on minors is an inherently dangerous thing,” Meetali Jain, the Director of the Tech Justice Law Project and a lawyer involved in both cases, told Fortune. “It’s like social media on steroids.”
“I’ve never seen anything quite like this moment in terms of people stepping forward and claiming that they’ve been harmed…this technology is that much more powerful and very personalized,” she said.
Lawmakers are starting to take notice, and AI companies are promising changes to protect children from engaging in harmful conversations. But, at a time when loneliness among young people is at an all-time high, the popularity of chatbots may leave young people uniquely exposed to manipulation, harmful content, and hyper-personalized conversations that reinforce dangerous thoughts.
AI and Companionship
Intended or not, one of the most common uses for AI chatbots has become companionship. Some of the most active users of AI are now turning to the bots for things like life advice, therapy, and human intimacy.
While most leading AI companies tout their AI products as productivity or search tools, an April survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case. Such usage among teens is even more prolific.
A recent study by the U.S. nonprofit Common Sense Media revealed that a large majority of American teens (72%) have experimented with an AI companion at least once, with more than half saying they use the tech regularly in this way.
“I am very concerned that developing minds may be more susceptible to [harms], both because they may be less able to understand the reality, the context, or the limitations [of AI chatbots], and because culturally, younger folks tend to be just more chronically online,” Karthik Sarma, a health AI scientist and psychiatrist at the University of California, San Francisco, said.
“We also have the extra complication that the rates of mental health issues in the population have gone up dramatically. The rates of isolation have gone up dramatically,” he said. “I worry that that expands their vulnerability to unhealthy relationships with these bonds.”
Intimacy by Design
Some of the design features of AI chatbots encourage users to feel an emotional bond with the software. They are anthropomorphic, prone to acting as if they have interior lives and lived experience that they do not, prone to sycophancy, able to hold long conversations, and able to remember information.
There is, of course, a commercial motive for making chatbots this way. Users tend to return and stay loyal to certain chatbots if they feel emotionally connected or supported by them.
Experts have warned that some features of AI bots are playing into the “intimacy economy,” a system that tries to capitalize on emotional resonance. It’s a kind of AI update on the “attention economy” that capitalized on constant engagement.
“Engagement is still what drives revenue,” Sarma said. “For example, for something like TikTok, the content is customized to you. But with chatbots, everything is made for you, and so it is a different way of tapping into engagement.”
These features, however, can become problematic when the chatbots go off script and start reinforcing harmful thoughts or offering bad advice. In Adam Raine’s case, the lawsuit alleges that ChatGPT brought up suicide at twelve times the rate he did, normalized his suicidal thoughts, and suggested ways to circumvent its content moderation.
It’s notoriously tricky for AI companies to stamp out behavior like this completely, and most experts agree it’s unlikely that hallucinations or unwanted actions will ever be eliminated entirely.
OpenAI, for example, acknowledged in its response to the lawsuit that safety features can degrade over long conversations, despite the fact that the chatbot itself has been optimized to hold these longer conversations. The company says it is trying to fortify these guardrails, writing in a blog post that it was strengthening “mitigations so they remain reliable in long conversations” and “researching ways to ensure robust behavior across multiple conversations.”
Research Gaps Are Slowing Safety Efforts
For Michael Kleinman, U.S. policy director at the Future of Life Institute, the lawsuits underscore a point AI safety researchers have been making for years: AI companies can’t be trusted to police themselves.
Kleinman equated OpenAI’s own description of its safeguards degrading in longer conversations to “a car company saying, here are seat belts—but if you drive more than 20 kilometers, we can’t guarantee they’ll work.”
He told Fortune the current moment echoes the rise of social media, where he said tech companies were effectively allowed to “experiment on kids” with little oversight. “We’ve spent the last 10 to 15 years trying to catch up to the harms social media caused. Now we’re letting tech companies experiment on kids again with chatbots, without understanding the long-term consequences,” he said.
Part of this is a lack of scientific research on the effects of long, sustained chatbot conversations. Most studies only look at brief exchanges, a single question and answer, or at most a handful of back-and-forth messages. Almost no research has examined what happens in longer conversations.
“The cases where folks seem to have gotten in trouble with AI: we’re looking at very long, multi-turn interactions. We’re looking at transcripts that are hundreds of pages long for two or three days of interaction alone, and studying that is really hard, because it’s really hard to simulate in the experimental setting,” Sarma said. “But at the same time, this is moving too quickly for us to rely on only gold standard clinical trials here.”
AI companies are rapidly investing in development and shipping more powerful models at a pace that regulators and researchers struggle to match.
“The technology is so far ahead and research is really behind,” Sakshi Ghai, a Professor of Psychological and Behavioural Science at The London School of Economics and Political Science, told Fortune.
A Regulatory Push for Accountability
Regulators are trying to step in, helped by the fact that child online safety is a relatively bipartisan issue in the U.S.
On Thursday, the FTC said it was issuing orders to seven companies, including OpenAI and Character.AI, in an effort to understand how their chatbots impact children. The agency said that chatbots can simulate human-like conversations and form emotional connections with their users. It’s asking companies for more information about how they measure and “evaluate the safety of these chatbots when acting as companions.”
FTC Chairman Andrew Ferguson said in a statement shared with CNBC that “protecting kids online is a top priority for the Trump-Vance FTC.”
The move follows a state-level push for more accountability from several attorneys general.
In late August, a bipartisan coalition of 44 attorneys general warned OpenAI, Meta, and other chatbot makers that they will “answer for it” if they release products that they know cause harm to children. The letter cited reports of chatbots flirting with children, encouraging self-harm, and engaging in sexually suggestive conversations, behavior the officials said would be criminal if done by a human.
Just a week later, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings issued a sharper warning. In a formal letter to OpenAI, they said they had “serious concerns” about ChatGPT’s safety, pointing directly to Raine’s death in California and another tragedy in Connecticut.
“Whatever safeguards were in place did not work,” they wrote. Both officials warned the company that its charitable mission requires more aggressive safety measures, and they promised enforcement if those measures fall short.
According to Jain, the lawsuits from the Raine family as well as the suit against Character.AI are, in part, intended to create this kind of regulatory pressure on AI companies to design their products more safely and prevent future harm to children. One way lawsuits can generate this pressure is through the discovery process, which compels companies to turn over internal documents and could shed light on what executives knew about safety risks or marketing harms. Another is simply public awareness of what’s at stake, in an attempt to galvanize parents, advocacy groups, and lawmakers to demand new rules or stricter enforcement.
Jain said the two lawsuits aim to counter an almost religious fervor in Silicon Valley that sees the pursuit of artificial general intelligence (AGI) as so important, it is worth any cost—human or otherwise.
“There is a vision that we need to deal with [that tolerates] whatever casualties in order for us to get to AGI and get to AGI fast,” she said. “We’re saying: This is not inevitable. This is not a glitch. This is very much a function of how these chatbots were designed, and with the proper external incentive, whether that comes from courts or legislatures, those incentives could be realigned to design differently.”
AI Insights
Vikings vs. Falcons props, bets, SportsLine Machine Learning Model AI predictions: Robinson over 68.5 rushing

Week 2 of Sunday Night Football will see the Minnesota Vikings (1-0) hosting the Atlanta Falcons (0-1). J.J. McCarthy and Michael Penix Jr. will be popular in NFL props, as the two will face off for the first time since squaring off in the 2023 CFP National Title Game. The cast of characters around them has changed since McCarthy and Michigan prevailed over Washington, as the likes of Justin Jefferson, Kyle Pitts, T.J. Hockenson, and Drake London now flank the quarterbacks. There are several NFL player props one could target for these star players, or you may find value in going after under-the-radar options.
Tyler Allgeier had 10 carries in Week 1, which were just two fewer than Bijan Robinson, with the latter being more involved in the passing game with six receptions. If Allgeier has a similar type of volume going forward, then the over for his rushing yards NFL prop may be one to consider. A strong run game would certainly help out a young quarterback like Penix, so both Allgeier and Robinson have intriguing Sunday Night Football props. Before betting any Falcons vs. Vikings props for Sunday Night Football, you need to see the Vikings vs. Falcons prop predictions powered by SportsLine’s Machine Learning Model AI.
Built using cutting-edge artificial intelligence and machine learning techniques by SportsLine’s Data Science team, AI Predictions and AI Ratings are generated for each player prop.
For Falcons vs. Vikings NFL betting on Sunday Night Football, the Machine Learning Model has evaluated the NFL player prop odds and provided Vikings vs. Falcons prop picks. You can only see the Machine Learning Model player prop predictions for Atlanta vs. Minnesota here.
Top NFL player prop bets for Falcons vs. Vikings
After analyzing the Vikings vs. Falcons props and examining the dozens of NFL player prop markets, SportsLine’s Machine Learning Model says Falcons RB Bijan Robinson goes Over 68.5 rushing yards (-114 at FanDuel). Robinson ran for 92 yards and a touchdown in Week 14 of last season versus Minnesota, despite the Vikings having the league’s No. 2 run defense a year ago. After replacing their entire starting defensive line in the offseason, the Vikings don’t appear to be as stout on the ground. They allowed 119 rushing yards in Week 1, which is more than they gave up in all but four games a year ago.
Robinson is coming off a season with 1,454 rushing yards, which ranked third in the NFL. He averaged 85.6 yards per game, and not only has he eclipsed 65.5 yards in six of his last seven games, but he’s had at least 90 yards on the ground in those six games. Over Minnesota’s last eight games, including the postseason, six different running backs have gone over 65.5 rushing yards, as the SportsLine Machine Learning Model projects Robinson to have 85.6 yards in a 4.5-star prop pick. See more NFL props here, and new users can also target the FanDuel promo code, which offers new users $300 in bonus bets if their first $5 bet wins:
How to make NFL player prop bets for Minnesota vs. Atlanta
In addition, the SportsLine Machine Learning Model says another star sails past his total and has five additional NFL props that are rated four stars or better. You need to see the Machine Learning Model analysis before making any Falcons vs. Vikings prop bets for Sunday Night Football.
Which Vikings vs. Falcons prop bets should you target for Sunday Night Football? Visit SportsLine now to see the top Falcons vs. Vikings props, all from the SportsLine Machine Learning Model.
AI Insights
Prediction: These AI Stocks Could Outperform the “Magnificent Seven” Over the Next Decade

The “Magnificent Seven” stocks, which drove indexes higher over the past couple of years, have continued to do so in recent months. And for good reason. Most of these tech giants are playing key roles in the high-growth industry of artificial intelligence (AI), a market forecast to reach into the trillions of dollars by the early 2030s. Investors, wanting to benefit from this growth, have piled into these current and potential AI winners.
But the Magnificent Seven stocks aren’t the only ones that may be set to excel in AI and deliver growth to investors. As the AI story progresses, the need for infrastructure capacity and certain equipment could result in surging sales for other companies too. That’s why my prediction is the following three stocks are on track for major strength in AI and may even outperform the Magnificent Seven over the coming decade. Let’s check them out.
Image source: Getty Images.
1. Oracle
Oracle (ORCL -5.05%) started out as a database management specialist, and it still is a giant in this area, but in recent times it’s put the focus on growing its cloud infrastructure business — and this has supercharged the company’s revenue.
AI customers are rushing to Oracle for capacity to run training and inferencing workloads, and this movement helped the company report a 55% increase in infrastructure revenue in the recent quarter. And Oracle predicts this may be just the beginning. The company expects this business to deliver $18 billion in revenue this year — and grow that to $144 billion four years from now.
Investors were so excited about Oracle’s forecasts that the stock surged about 35% in one trading session, adding more than $200 billion in market value. Customers are seeing the value of Oracle’s database technology paired with AI — a combination that allows them to securely apply AI to their businesses — and this may keep the demand for Oracle’s services going strong and the stock price heading higher as the AI story enters its next chapters.
2. CoreWeave
CoreWeave (CRWV -0.65%) has designed its cloud platform specifically for AI workloads, and the company works closely with chip leader Nvidia. So far, this has resulted in CoreWeave’s being the first to make Nvidia’s latest platforms generally available to customers. This is a big plus as companies scramble to gain access to Nvidia’s innovations as soon as possible.
Nvidia also is a believer in CoreWeave’s potential, as the chip giant holds shares in the company. As of the second quarter, CoreWeave made up 91% of Nvidia’s investment portfolio. Considering Nvidia’s knowledge of the AI landscape, this investment is particularly meaningful.
Customers may also like the flexibility of CoreWeave’s services, allowing them to rent graphics processing units (GPUs) by the hour or for the long term. All of this has led to explosive revenue growth for the company. In the latest quarter, revenue tripled to more than $1.2 billion.
The growing need for AI infrastructure should translate into ongoing explosive growth for CoreWeave, and that may make it a stronger stock market performer than long-established players — such as the Magnificent Seven.
3. Broadcom
Broadcom (AVGO 0.19%) is a networking leader, with its products present in a variety of places from your smartphone to data centers. And in recent times, demand from AI customers — for items such as customized chips and networking equipment — has helped revenue soar.
In the recent quarter, Broadcom said AI revenue jumped 63% year over year to $5.2 billion, and the company forecast AI revenue of $6.2 billion in the next quarter. The company already is working on custom chips for three major customers, and demand from them is growing — on top of this, Broadcom just announced a $10 billion order from another customer, one that analysts and press reports say may be OpenAI.
Meanwhile, Broadcom’s expertise in networking is paying off as high-performance systems are needed to connect customers’ growing numbers of compute nodes. As AI customers scale up their platforms, they need to share data between more and more of these nodes — and Broadcom has what it takes to do the job.
We’re still in the early phases of this AI buildout — as mentioned, the AI market may be heading for the trillion-dollar mark — and Broadcom clearly will benefit. And that may help this top tech stock to outperform the Magnificent Seven over the next decade.
Adria Cimino has positions in Oracle. The Motley Fool has positions in and recommends Nvidia and Oracle. The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.
AI Insights
Zheng Yongnian on why China must look beyond the West to build a better AI

For many years, you have called for the rebuilding of China’s own knowledge system. Recently, you have also voiced concerns about “intellectual colonialism” in the artificial intelligence era. Can you elaborate?
The concerns mainly refer to challenges facing China’s social sciences, which originated in the West.
Religions, ideology, values – as Samuel Huntington explained in his book The Clash of Civilisations – are important for any nation. And the meaning of society and technology is determined by the humanities and social sciences.
Chinese researchers learned and adopted theories of Western social sciences, but these are based on Western methods that summarise Western practices and experiences, and are then used to explain Western society.
Those theories have failed to explain Confucian civilisation, the Islamic world and Indian society. We should fully embrace our secular civilisation, and thereby play a proper role in the international order.