AI Research
Claude Opus 4 and 4.1 can now end a rare subset of conversations | Anthropic

We recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces. This ability is intended for use in rare, extreme cases of persistently harmful or abusive user interactions. This feature was developed primarily as part of our exploratory work on potential AI welfare, though it has broader relevance to model alignment and safeguards.
We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible. Allowing models to end or exit potentially distressing interactions is one such intervention.
In pre-deployment testing of Claude Opus 4, we included a preliminary model welfare assessment. As part of that assessment, we investigated Claude’s self-reported and behavioral preferences, and found a robust and consistent aversion to harm. Examples included requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror. Claude Opus 4 showed:
- A strong preference against engaging with harmful tasks;
- A pattern of apparent distress when engaging with real-world users seeking harmful content; and
- A tendency to end harmful conversations when given the ability to do so in simulated user interactions.
These behaviors primarily arose in cases where users persisted with harmful requests and/or abuse despite Claude repeatedly refusing to comply and attempting to productively redirect the interactions.
Our implementation of Claude’s ability to end chats reflects these findings while continuing to prioritize user wellbeing. Claude is directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.
In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat (the latter scenario is illustrated in the figure below). The scenarios where this will occur are extreme edge cases—the vast majority of users will not notice or be affected by this feature in any normal product use, even when discussing highly controversial issues with Claude.
When Claude chooses to end a conversation, the user will no longer be able to send new messages in that conversation. However, this will not affect other conversations on their account, and they will be able to start a new chat immediately. To address the potential loss of important long-running conversations, users will still be able to edit and retry previous messages to create new branches of ended conversations.
We’re treating this feature as an ongoing experiment and will continue refining our approach. If users encounter a surprising use of the conversation-ending ability, we encourage them to submit feedback by giving Claude’s message a thumbs-down reaction or using the dedicated “Give feedback” button.
AI Research
NSF Plans New AI Research Operations Hub – MeriTalk

The National Science Foundation (NSF) said on Wednesday that it is looking to open a National Artificial Intelligence Research Resource Operations Center (NAIRR-OC) to arm the nation’s researchers and educators with critical AI tools and resources.
In a solicitation, NSF said that it aims to build on its National AI Research Resource (NAIRR) pilot by establishing sustained operational capabilities for NAIRR and broadening access to AI resources for the research community, which largely lacks the tools and resources “to investigate fundamental AI questions and train students.”
NAIRR was launched in January 2024 and serves as a shared national infrastructure to support the AI research community and power responsible AI use.
“The NAIRR Operating Center solicitation marks a key step in the transition from the NAIRR Pilot to building a sustainable and scalable NAIRR program,” said Katie Antypas, director of the NSF Office of Advanced Cyberinfrastructure.
“We look forward to continued collaboration with private sector and agency partners, whose contributions have been critical in demonstrating the innovation and scientific impact that comes when critical AI resources are made accessible to research and education communities across the country,” continued Antypas.
Specifically, NSF’s solicitation asks for proposals to create a community-based center to oversee “the development of the overarching framework, operations strategy and management structure needed to support the NAIRR’s scaling and growth.”
That includes integrating advanced computing and data resources, running a centralized web portal with access to tools, and collaborating with partner organizations, while conducting outreach and engagement with the national AI research community.
NSF said that the NAIRR-OC will directly carry out priorities in the Trump administration’s AI Action Plan, released in July, which said that the federal government should “build the foundations for a lean and sustainable NAIRR operations capability that can connect an increasing number of researchers and educators across the country to critical AI resources.”
Since NAIRR’s launch, it has connected 400 research teams with computing platforms, datasets, software, and models; it is partnering with 28 industry members and is supported by 14 federal agencies, according to NSF.
AI Research
Harnessing AI to unlock the power of data for business success

AI Research
Greek humanoid Olympiad reveals robots are far behind artificial intelligence — OODAloop

Greece recently hosted the world’s first International Humanoid Olympiad in Olympia, where humanoid robots competed in boxing and soccer matches. The event, held from August 29 to September 2, was organized by Acumino and Endeavor, who also invited industry leaders to speak alongside the machines’ demonstrations.
While humanoid robots have gained popularity for mirroring human actions, we have yet to see them take on routine household chores like washing dishes and tidying closets. AI has advanced explosively in the past year through applications like ChatGPT, but the same cannot be said of its physical cousins, the humanoid robots, which remain far behind AI software and tools in learning from data.
Minas Liarokapis, the Greek academic and startup founder who organized the Olympiad, made a rather bold prediction about when humanoids might lend a hand in the kitchen and with other household chores. “I really believe that humanoids will first go to space and then to houses … the house is the final frontier,” he told the Associated Press (AP) on Tuesday.
Full report: Humanoid robots lack data to keep pace with explosive rise of AI.