AI Insights
New York Enacts Artificial Intelligence Companion Mental Health Law

Key Takeaways:
- New York is the first state to enact mental health-focused statutory provisions for “AI Companions,” requiring user disclosures and suicide prevention measures for emotionally interactive AI systems.
- Other states are exploring similar approaches, with laws targeting compulsive use, requiring suicide prevention protocols or mandating user awareness of AI-human distinctions.
- Organizations must assess their AI risk to ensure compliance with the myriad laws and statutory provisions governing AI systems.
As part of its state budget process, New York enacted new statutory provisions for “AI Companions” in May 2025, highlighting an emerging push to monitor and safeguard the mental health of users of AI tools and systems. The provisions reflect a broader regulatory awareness of the mental health risks involved in AI interactions and a desire to protect vulnerable users, particularly minors and people experiencing mental health crises such as suicidal ideation.
An Emerging Desire to Safeguard Mental Health in an AI-Enabled World
Regulators are increasingly aware of the mental health risks involved in AI interactions and are seeking ways to safeguard vulnerable users. These risks were brought into sharp focus by the death of Sewell Setzer, a 14-year-old Florida teenager who died by suicide after forming a romantic and emotional relationship with an AI chatbot and allegedly telling the chatbot he was thinking about suicide. His death has led to a closely watched lawsuit over the chatbot’s role.
States have considered a variety of techniques to regulate this space, ranging from user disclosures to safety measures. Utah’s law on mental health chatbots (H.B. 452), for example, imposes advertising restrictions and requires certain disclosures so users know they are interacting with an AI rather than a human being. Other states, like California (via SB 243), are considering design mandates such as banning reward systems that encourage compulsive use and requiring suicide prevention measures in AI chatbots marketed as emotional companions. Currently, NY is the only state that has enacted safety-focused measures (like suicide prevention) around AI companionship.
NY’s Approach to Embedding Mental Health Safeguards in AI
NY’s new statutory provisions (which go into effect on November 5, 2025) focus on AI systems that retain user information and preferences from prior interactions to engage in human-like conversation with their users.
These systems, termed “AI Companions,” are characterized by their ability to sustain ongoing conversations about personal matters, including topics typically found in friendships or emotionally supportive interactions. That means chatbots, digital wellness tools, mental health apps or even productivity assistants with emotionally aware features could fall within the scope of AI Companions depending on how they interact with users, although interactive AI systems used strictly for customer service, internal operations, research and/or productivity optimization are excluded.
The law seeks to drive consumer awareness and prevent suicide and other forms of self-harm by mandating that such AI systems (1) affirmatively notify users they are not interacting with a human and (2) take measures to prevent self-harm. Operators must provide clear and conspicuous notifications at the start of any interaction (and every three hours during long, ongoing interactions) to ensure users are aware they’re not interacting with a human. Operators must also ensure the AI system has reasonable protocols to detect suicidal ideation or expressions of self-harm by a user and to refer the user to crisis service providers, such as the 988 Suicide Prevention and Behavioral Health Crisis Hotline, whenever such expressions are detected.
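For teams translating these obligations into product requirements, the sketch below is a minimal Python illustration of the two mechanics described above: a disclosure shown at the start of a session and again every three hours, and a screen for expressions of self-harm that triggers a crisis referral. The class name, keyword list and message text are hypothetical; the statute prescribes outcomes, not an implementation, and any real detection protocol would need to be far more sophisticated than a keyword match.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical constants; wording and cadence are illustrative only.
DISCLOSURE = "Reminder: you are talking with an AI companion, not a human being."
CRISIS_REFERRAL = (
    "It sounds like you may be in distress. You can reach the 988 crisis "
    "line by calling or texting 988."
)
DISCLOSURE_INTERVAL = timedelta(hours=3)  # at session start, then every three hours


class CompanionSession:
    """Tracks when the AI disclosure was last shown during one user session."""

    def __init__(self) -> None:
        self.last_disclosure: Optional[datetime] = None

    def maybe_disclose(self, now: datetime) -> Optional[str]:
        """Return the disclosure text if one is due, else None."""
        if self.last_disclosure is None or now - self.last_disclosure >= DISCLOSURE_INTERVAL:
            self.last_disclosure = now
            return DISCLOSURE
        return None

    def screen_for_self_harm(self, message: str) -> Optional[str]:
        """Naive keyword screen standing in for whatever detection protocol an operator adopts."""
        keywords = ("suicide", "kill myself", "end my life", "hurt myself")
        if any(k in message.lower() for k in keywords):
            return CRISIS_REFERRAL
        return None
```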
Assessing AI Regulatory Risk
Whether in the context of chatbots, wellness apps, education platforms or AI-driven social tools, regulators are increasingly focused on systems that engage deeply with users. Because these systems may be uniquely positioned to detect warning signs like expressions of hopelessness, isolation or suicidal ideation, it’s likely that other states will follow NY in requiring certain AI systems to identify, respond to or otherwise escalate signals of mental health distress to protect vulnerable populations like minors.
NY’s new AI-related mental health provisions also showcase how U.S. laws and statutory provisions around AI focus heavily on how the technology is being used. In other words, your use case determines your risk. To effectively navigate the U.S. patchwork of AI-related laws and statutory provisions, which currently includes more than 100 state laws, organizations must evaluate each AI use case to identify their compliance risks and obligations.
Polsinelli offers an AI risk assessment that enables organizations to do exactly that. Understanding your AI risks is your first line of defense — and a powerful business enabler. Let us help you evaluate whether your AI use case falls within use-case- or industry-specific laws like NY’s “AI Companion” law, or industry-agnostic ones like Colorado’s AI Act, so you can deploy innovative business tools and solutions with confidence.
AI Insights
OpenAI says spending to rise to $115 billion through 2029: Information

OpenAI Inc. told investors it projects its spending through 2029 may rise to $115 billion, about $80 billion more than previously expected, The Information reported, without providing details on how and when shareholders were informed.
OpenAI is in the process of developing its own data center server chips and facilities to power its technologies and rein in cloud server rental expenses, according to the report.
The company predicted it could spend more than $8 billion this year, roughly $1.5 billion more than an earlier projection, The Information said.
Another factor influencing the increased need for capital is computing costs, on which the company expects to spend more than $150 billion from 2025 through 2030.
The cost to develop AI models is also higher than previously expected, The Information said.
AI Insights
Microsoft Says Azure Service Affected by Damaged Red Sea Cables

Microsoft Corp. said on Saturday that clients of its Azure cloud platform may experience increased latency after multiple international cables in the Red Sea were cut.
AI Insights
Geoffrey Hinton says AI will cause massive unemployment and send profits soaring

Pioneering computer scientist Geoffrey Hinton, whose work has earned him a Nobel Prize and the moniker “godfather of AI,” said artificial intelligence will spark a surge in unemployment and profits.
In a wide-ranging interview with the Financial Times, the former Google scientist cleared the air about why he left the tech giant, raised alarms on potential threats from AI, and revealed how he uses the technology. But he also predicted who the winners and losers will be.
“What’s actually going to happen is rich people are going to use AI to replace workers,” Hinton said. “It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”
That echoes comments he gave to Fortune last month, when he said AI companies are more concerned with short-term profits than the long-term consequences of the technology.
For now, layoffs haven’t spiked, but evidence is mounting that AI is shrinking opportunities, especially at the entry level where recent college graduates start their careers.
A survey from the New York Fed found that companies using AI are much more likely to retrain their employees than fire them, though layoffs are expected to rise in the coming months.
Hinton said earlier that healthcare is the one industry that will be safe from the potential jobs armageddon.
“If you could make doctors five times as efficient, we could all have five times as much health care for the same price,” he explained on the Diary of a CEO YouTube series in June. “There’s almost no limit to how much health care people can absorb—[patients] always want more health care if there’s no cost to it.”
Still, Hinton believes AI will take over jobs involving mundane tasks, while sparing some jobs that require a high level of skill.
In his interview with the FT, he also dismissed OpenAI CEO Sam Altman’s idea of paying a universal basic income as AI disrupts the economy and reduces demand for workers, saying it “won’t deal with human dignity” and the value people derive from having jobs.
Hinton has long warned about the dangers of AI without guardrails, estimating a 10% to 20% chance of the technology wiping out humans after the development of superintelligence.
In his view, the dangers of AI fall into two categories: the risk the technology itself poses to the future of humanity, and the consequences of AI being manipulated by people with bad intent.
In his FT interview, he warned AI could help someone build a bioweapon and lamented the Trump administration’s unwillingness to regulate AI more closely, while China is taking the threat more seriously. But he also acknowledged potential upside from AI amid its immense possibilities and uncertainties.
“We don’t know what is going to happen, we have no idea, and people who tell you what is going to happen are just being silly,” Hinton said. “We are at a point in history where something amazing is happening, and it may be amazingly good, and it may be amazingly bad. We can make guesses, but things aren’t going to stay like they are.”
Meanwhile, he told the FT how he uses AI in his own life, saying OpenAI’s ChatGPT is his product of choice. While he mostly uses the chatbot for research, Hinton revealed that a former girlfriend used ChatGPT “to tell me what a rat I was” during their breakup.
“She got the chatbot to explain how awful my behavior was and gave it to me. I didn’t think I had been a rat, so it didn’t make me feel too bad . . . I met somebody I liked more, you know how it goes,” he quipped.
Hinton also explained why he left Google in 2023. While media reports have said he quit so he could speak more freely about the dangers of AI, the 77-year-old Nobel laureate denied that was the reason.
“I left because I was 75, I could no longer program as well as I used to, and there’s a lot of stuff on Netflix I haven’t had a chance to watch,” he said. “I had worked very hard for 55 years, and I felt it was time to retire . . . And I thought, since I am leaving anyway, I could talk about the risks.”