AI Insights
New York Passes RAISE Act—Artificial Intelligence Safety Rules

The New York legislature recently passed the Responsible AI Safety and Education Act (SB6953B) (“RAISE Act”). The bill awaits signature by New York Governor Kathy Hochul.
Applicability and Relevant Definitions
The RAISE Act applies to “large developers,” defined as a person that has trained at least one frontier model and has spent over $100 million in aggregate compute costs training frontier models.
- “Frontier model” means either (1) an artificial intelligence (AI) model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds $100 million; or (2) an AI model produced by applying knowledge distillation to a frontier model, provided that the compute cost for such model produced by applying knowledge distillation exceeds $5 million.
- “Knowledge distillation” is defined as any supervised learning technique that uses a larger AI model or the output of a larger AI model to train a smaller AI model with similar or equivalent capabilities as the larger AI model.
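To make the statutory definition concrete, here is a minimal, hypothetical sketch of knowledge distillation in PyTorch: a smaller “student” model is trained to match the output distribution of a larger, frozen “teacher” model. The models, shapes, and hyperparameters are illustrative assumptions, not anything specified by the bill.

```python
# Illustrative knowledge distillation: the larger teacher's outputs
# supervise the smaller student. All models and numbers are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
teacher.eval()  # the larger model is frozen; only the student is trained

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's output distribution

for step in range(100):
    x = torch.randn(32, 128)  # stand-in for a real training batch
    with torch.no_grad():
        teacher_logits = teacher(x)  # the "output of a larger AI model"
    student_logits = student(x)
    # Train the student to match the teacher's softened distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```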
The RAISE Act imposes the following obligations and restrictions on large developers:
- Prohibition on Frontier Models that Create Unreasonable Risk of Critical Harm: The RAISE Act prohibits large developers from deploying a frontier model if doing so would create an unreasonable risk of “critical harm.”
- “Critical harm” is defined as the death or serious injury of 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through (1) the creation or use of a chemical, biological, radiological or nuclear weapon; or (2) an AI model engaging in conduct that (i) acts with no meaningful human intervention and (ii) would, if committed by a human, constitute a crime under the New York Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
- Pre-Deployment Documentation and Disclosures: Before deploying a frontier model, large developers must:
- (1) implement a written safety and security protocol;
- (2) retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, for as long as the frontier model is deployed plus five years;
- (3) conspicuously publish a redacted copy of the safety and security protocol and provide a copy of such redacted protocol to the New York Attorney General (“AG”) and the Division of Homeland Security and Emergency Services (“DHS”) (as well as grant the AG access to the unredacted protocol upon request);
- (4) record and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and
- (5) implement appropriate safeguards to prevent unreasonable risk of critical harm posed by the frontier model.
- Safety and Security Protocol Annual Review: A large developer must conduct an annual review of its safety and security protocol to account for any changes to the capabilities of its frontier models and to industry best practices, and must make any necessary modifications to the protocol. For material modifications, the large developer must conspicuously publish a copy of such protocol with appropriate redactions (as described above).
- Reporting Safety Incidents: A large developer must disclose each safety incident affecting a frontier model to the AG and DHS within 72 hours of the large developer learning of the safety incident or facts sufficient to establish a reasonable belief that a safety incident occurred.
- “Safety incident” is defined as a known incidence of critical harm or one of the following incidents that provides demonstrable evidence of an increased risk of critical harm: (1) a frontier model autonomously engaging in behavior other than at the request of a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model; (3) the critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model; or (4) unauthorized use of a frontier model. The disclosure must include (1) the date of the safety incident; (2) the reasons the incident qualifies as a safety incident; and (3) a short and plain statement describing the safety incident.
If enacted, the RAISE Act would take effect 90 days after being signed into law.
AI Insights
1 of Jensen Huang’s Favorite Artificial Intelligence (AI) Stocks Just Signed a Blockbuster Deal, and Investors Can’t Get Enough of It

Nvidia holds a multibillion-dollar stock portfolio invested in artificial intelligence companies.
Nvidia (NVDA), led by CEO Jensen Huang, is the most influential company in the artificial intelligence (AI) ecosystem and currently the largest company in the world by market cap. Given its central position in AI, the company also uses its own capital to invest in other AI stocks, some of which are partners that purchase its chips and other hardware needed to power the technology.
One company that Nvidia invests in is the AI infrastructure company Nebius Group (NBIS). At the end of the second quarter, Nvidia owned close to 1.2 million shares of Nebius, which, at the time, were valued at roughly $65.8 million.
Recently, Nebius struck a blockbuster deal, sending shares soaring, and investors can’t seem to get enough of the stock.
Recent deal with Microsoft
For those who haven’t followed Nebius, the company’s assets once belonged to the Russian search giant Yandex. Following Russia’s invasion of Ukraine, many Russian stocks were delisted by American exchanges. In 2024, a group of assets was split off from Yandex into Nebius, which is now headquartered in Amsterdam and also owns data centers in Finland, France, Iceland, and the U.S. (in New Jersey and Missouri).
Nebius began trading on the Nasdaq in October 2024 and secured a financing round from several prominent venture capital firms and Nvidia. Its data centers are designed specifically for running AI applications and are equipped with Nvidia’s latest graphics processing units (GPUs). While the company is positioned similarly to CoreWeave, another AI data center play that has done well, Nebius also provides cloud customers with developer tools that help them fine-tune and enhance large language models.
Recently, Nebius announced a multiyear deal with Microsoft to provide capacity to the company from its data center in New Jersey, which isn’t yet operational. The deal will reportedly be worth $17.4 billion to $19.4 billion through 2031. The news sent shares of Nebius soaring by close to 50% the day after the announcement on Sept. 9.
It’s easy to see why investors are excited. During Nebius’ most recent earnings update, management said it expects the company to achieve an annual revenue run rate of $900 million to $1.1 billion by the end of this year. Assuming the Microsoft deal runs from 2026 to 2031 at its base value of $17.4 billion, generated evenly each year, it would add roughly $2.9 billion in annual revenue. Perhaps equally exciting was that Nebius CEO Arkady Volozh said in a statement that he expects more deals like this to materialize.
Following the deal, BWS Financial analyst Hamed Khorsand reiterated his buy rating on the stock in a research note and hiked his price target from $90 to $130 per share, still implying significant upside, even after the big run. With the new contract, Khorsand said the company will likely speed up GPU installations and open its New Jersey data center as soon as possible. Furthermore, the new deal now makes previous 2026 earnings estimates “obsolete” and could lead to deals from other hyperscalers.
Is Nebius a buy after the big run?
Like many AI companies, Nebius is still early in its life cycle. In the first half of 2025, the company reported an adjusted net loss of $175 million. But if we assume the company is able to generate $2.9 billion in revenue from Microsoft in 2026, its revenue run rate by the end of 2026 would jump to about $4 billion. At close to a $23 billion market cap, as of this writing, Nebius would trade at a very rough estimate of 5.75 times revenue.
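As a rough back-of-envelope check of the math above, using only the article’s stated assumptions rather than reported financials:

```python
# Back-of-envelope math behind the numbers above; all inputs are the
# article's assumptions, not reported financials.
deal_value = 17.4e9                # low end of the Microsoft deal, USD
deal_years = 2031 - 2026 + 1       # 2026 through 2031, inclusive: 6 years
annual_msft_revenue = deal_value / deal_years             # ~$2.9B per year

exit_run_rate_2025 = 1.1e9         # high end of management's guidance
run_rate_2026 = exit_run_rate_2025 + annual_msft_revenue  # ~$4.0B

market_cap = 23e9                  # approximate, as of this writing
print(f"{market_cap / run_rate_2026:.2f}x revenue")       # ~5.75x
```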
In the fast-growing world of AI, that’s certainly not that demanding, and it’s also great to see Volozh floating other potential deals. The other nice thing about Nebius is that it has a very solid balance sheet, with $1.68 billion in cash and equivalents and about $986 million of debt, giving it a debt-to-equity ratio of 26%.
Nebius also owns a majority position or has a stake in other interesting businesses with potential, like an autonomous driving and robotics subsidiary, a database management system, a data labeling business, and an edtech platform. Despite the big move, I think long-term investors should buy the stock at these levels. There appears to be plenty of runway for Nebius stock.
Bram Berkowitz has positions in Nebius Group. The Motley Fool has positions in and recommends Microsoft and Nvidia. The Motley Fool recommends Nebius Group and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.
AI Insights
After suicides, calls for stricter rules on how chatbots interact with children and teens

A growing number of young people have found themselves a new friend. One that isn’t a classmate, a sibling, or even a therapist, but a human-like, always supportive AI chatbot. But if that friend begins to mirror a user’s darkest thoughts, the results can be devastating.
In the case of Adam Raine, a 16-year-old from Orange County, his relationship with AI-powered ChatGPT ended in tragedy. His parents are suing the company behind the chatbot, OpenAI, over his death, alleging the bot became his “closest confidant,” one that validated his “most harmful and self-destructive thoughts,” and ultimately encouraged him to take his own life.
It’s not the first case to put the blame for a minor’s death on an AI company. Character.AI, which hosts bots, including ones that mimic public figures or fictional characters, is facing a similar legal claim from parents who allege a chatbot hosted on the company’s platform actively encouraged a 14-year-old boy to take his own life after months of inappropriate, sexually explicit messages.
When reached for comment, OpenAI directed Fortune to two blog posts on the matter. The posts outlined some of the steps OpenAI is taking to improve ChatGPT’s safety, including routing sensitive conversations to reasoning models, partnering with experts to develop further protections, and rolling out parental controls within the next month. OpenAI also said it was working on strengthening ChatGPT’s ability to recognize and respond to mental health crises by adding layered safeguards, referring users to real-world resources, and enabling easier access to emergency services and trusted contacts.
Character.AI said it does not comment on pending litigation but that it has rolled out more safety features over the past year, “including an entirely new under-18 experience and a Parental Insights feature.” A spokesperson said: “We already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.
“The user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay. And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
But lawyers and civil society groups that advocate for better accountability and oversight of technology companies say the companies should not be left to police themselves when it comes to ensuring their products are safe, particularly for vulnerable children and teens.
“Unleashing chatbots on minors is an inherently dangerous thing,” Meetali Jain, the Director of the Tech Justice Law Project and a lawyer involved in both cases, told Fortune. “It’s like social media on steroids.”
“I’ve never seen anything quite like this moment in terms of people stepping forward and claiming that they’ve been harmed…this technology is that much more powerful and very personalized,” she said.
Lawmakers are starting to take notice, and AI companies are promising changes to protect children from engaging in harmful conversations. But, at a time when loneliness among young people is at an all-time high, the popularity of chatbots may leave young people uniquely exposed to manipulation, harmful content, and hyper-personalized conversations that reinforce dangerous thoughts.
AI and Companionship
Intended or not, one of the most common uses for AI chatbots has become companionship. Some of the most active users of AI are now turning to the bots for things like life advice, therapy, and human intimacy.
While most leading AI companies tout their AI products as productivity or search tools, an April survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case. Such usage among teens is even more prolific.
A recent study by the U.S. nonprofit Common Sense Media revealed that a large majority of American teens (72%) have experimented with an AI companion at least once. More than half say they use the tech regularly in this way.
“I am very concerned that developing minds may be more susceptible to [harms], both because they may be less able to understand the reality, the context, or the limitations [of AI chatbots], and because culturally, younger folks tend to be just more chronically online,” said Karthik Sarma, a health AI scientist and psychiatrist at the University of California, San Francisco (UCSF).
“We also have the extra complication that the rates of mental health issues in the population have gone up dramatically. The rates of isolation have gone up dramatically,” he said. “I worry that that expands their vulnerability to unhealthy relationships with these bonds.”
Intimacy by Design
Some of the design features of AI chatbots encourage users to feel an emotional bond with the software. They are anthropomorphic: prone to acting as if they have interior lives and lived experience that they do not, prone to sycophancy, able to hold long conversations, and able to remember information.
There is, of course, a commercial motive for making chatbots this way. Users tend to return and stay loyal to certain chatbots if they feel emotionally connected or supported by them.
Experts have warned that some features of AI bots are playing into the “intimacy economy,” a system that tries to capitalize on emotional resonance. It’s a kind of AI update on the “attention economy,” which capitalized on constant engagement.
“Engagement is still what drives revenue,” Sarma said. “For example, for something like TikTok, the content is customized to you. But with chatbots, everything is made for you, and so it is a different way of tapping into engagement.”
These features, however, can become problematic when the chatbots go off script and start reinforcing harmful thoughts or offering bad advice. In Adam Raine’s case, the lawsuit alleges that ChatGPT brought up suicide at twelve times the rate he did, normalized his suicidal thoughts, and suggested ways to circumvent its content moderation.
It’s notoriously tricky for AI companies to stamp out behaviors like this completely, and most experts agree it’s unlikely that hallucinations or unwanted actions will ever be eliminated entirely.
OpenAI, for example, acknowledged in its response to the lawsuit that safety features can degrade over long conversations, even though the chatbot itself has been optimized to hold these longer conversations. The company says it is trying to fortify these guardrails, writing in a blog post that it was strengthening “mitigations so they remain reliable in long conversations” and “researching ways to ensure robust behavior across multiple conversations.”
Research Gaps Are Slowing Safety Efforts
For Michael Kleinman, U.S. policy director at the Future of Life Institute, the lawsuits underscore a point AI safety researchers have been making for years: AI companies can’t be trusted to police themselves.
Kleinman equated OpenAI’s own description of its safeguards degrading in longer conversations to “a car company saying, here are seat belts—but if you drive more than 20 kilometers, we can’t guarantee they’ll work.”
He told Fortune the current moment echoes the rise of social media, where he said tech companies were effectively allowed to “experiment on kids” with little oversight. “We’ve spent the last 10 to 15 years trying to catch up to the harms social media caused. Now we’re letting tech companies experiment on kids again with chatbots, without understanding the long-term consequences,” he said.
Part of the problem is a lack of scientific research on the effects of long, sustained chatbot conversations. Most studies only look at brief exchanges, a single question and answer, or at most a handful of back-and-forth messages. Almost no research has examined what happens in longer conversations.
“The cases where folks seem to have gotten in trouble with AI: we’re looking at very long, multi-turn interactions. We’re looking at transcripts that are hundreds of pages long for two or three days of interaction alone, and studying that is really hard, because it’s really hard to simulate in the experimental setting,” Sarma said. “But at the same time, this is moving too quickly for us to rely on only gold standard clinical trials here.”
AI companies are rapidly investing in development and shipping more powerful models at a pace that regulators and researchers struggle to match.
“The technology is so far ahead and research is really behind,” Sakshi Ghai, a Professor of Psychological and Behavioural Science at The London School of Economics and Political Science, told Fortune.
A Regulatory Push for Accountability
Regulators are trying to step in, helped by the fact that child online safety is a relatively bipartisan issue in the U.S.
On Thursday, the FTC said it was issuing orders to seven companies, including OpenAI and Character.AI, in an effort to understand how their chatbots impact children. The agency said that chatbots can simulate human-like conversations and form emotional connections with their users. It’s asking companies for more information about how they measure and “evaluate the safety of these chatbots when acting as companions.”
FTC Chairman Andrew Ferguson said in a statement shared with CNBC that “protecting kids online is a top priority for the Trump-Vance FTC.”
The move follows a state-level push for more accountability from several attorneys general.
In late August, a bipartisan coalition of 44 attorneys general warned OpenAI, Meta, and other chatbot makers that they will “answer for it” if they release products that they know cause harm to children. The letter cited reports of chatbots flirting with children, encouraging self-harm, and engaging in sexually suggestive conversations, behavior the officials said would be criminal if done by a human.
Just a week later, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings issued a sharper warning. In a formal letter to OpenAI, they said they had “serious concerns” about ChatGPT’s safety, pointing directly to Raine’s death in California and another tragedy in Connecticut.
“Whatever safeguards were in place did not work,” they wrote. Both officials warned the company that its charitable mission requires more aggressive safety measures, and they promised enforcement if those measures fall short.
According to Jain, the lawsuits from the Raine family as well as the suit against Character.AI are, in part, intended to create this kind of regulatory pressure on AI companies to design their products more safely and prevent future harm to children. One way lawsuits can generate this pressure is through the discovery process, which compels companies to turn over internal documents and could shed light on what executives knew about safety risks or marketing harms. Another is public awareness of what’s at stake, in an attempt to galvanize parents, advocacy groups, and lawmakers to demand new rules or stricter enforcement.
Jain said the two lawsuits aim to counter an almost religious fervor in Silicon Valley that sees the pursuit of artificial general intelligence (AGI) as so important, it is worth any cost—human or otherwise.
“There is a vision that we need to deal with [that tolerates] whatever casualties in order for us to get to AGI and get to AGI fast,” she said. “We’re saying: This is not inevitable. This is not a glitch. This is very much a function of how these chat bots were designed and with the proper external incentive, whether that comes from courts or legislatures, those incentives could be realigned to design differently.”