Ai2 Launches Asta: A New Standard for Trustworthy AI Agents in Science

The complete ecosystem includes AI agents, benchmarks, and tools to bring clarity and credibility to the scientific AI space
SEATTLE, August 26, 2025–(BUSINESS WIRE)–Ai2 (The Allen Institute for AI) today launched Asta, an integrated, open ecosystem designed to transform how science is done with AI agents. At a time when AI tools are flooding the research landscape—often opaque, untested, and unproven—Asta offers a principled alternative: a comprehensive collection that includes an agentic AI research assistant, the first rigorous benchmark suite for scientific agents, and a set of developer resources for building trustworthy tools.
Together, these components form a foundation for high-performance scientific AI that is transparent, evidence-based, and designed to earn the trust of scientists, developers, and institutions.
“AI can be transformative for science, but only if it’s held to the same standards as science itself,” said Ali Farhadi, CEO of Ai2. “With Asta, we’re not just building an assistant but an ecosystem built on transparency, reproducibility, and scientific rigor. It’s designed for real researchers solving real problems—and developers creating the next generation of agentic tools to accelerate scientific discoveries. It’s a bet on a future where AI doesn’t just keep up with science, it helps drive it forward.”
Asta: A New Kind of Research Partner
At its core, Asta is an intelligent, open-source AI assistant designed specifically for scientists. Unlike general-purpose tools, Asta understands the needs of research workflows. It doesn’t just retrieve information; it reviews literature, synthesizes evidence, and (in beta) analyzes data, all while providing citations.
Already in use by researchers at 194 institutions including the University of Chicago and the University of Washington, Asta is accelerating real-world discovery—from identifying therapeutic targets to exploring new areas of inquiry.
“More than ever before, researchers struggle with literature search and synthesis,” said James Evans, Director of the Knowledge Lab at the University of Chicago. “Ai2’s Asta ecosystem of AI agents, benchmarks, and tools helps to break these barriers. Its system is poised to accelerate the path from hunch to insight, transforming how we navigate the vast landscape of scientific understanding.”
A Fully Integrated Ecosystem for Scientific AI
Asta isn’t a standalone tool. It’s a full-stack ecosystem designed to support the entire lifecycle of scientific AI development and use.
Will artificial intelligence fuel moral chaos or positive change?

Artificial intelligence is transforming our world at an unprecedented rate, but what does this mean for Christians, morality and human flourishing?
In this episode of “The Inside Story,” Billy Hallowell sits down with The Christian Post’s Brandon Showalter to unpack the promises and perils of AI.
From positives like Bible translation to fears over what’s to come, they explore how believers can apply a biblical worldview to emerging technology, the dangers of becoming “subjects” of machines, and why keeping Christ at the center is the only true safeguard.
Plus, learn about The Christian Post’s upcoming “AI for Humanity” event at Colorado Christian University and how you can join the conversation in person or via livestream.
“The Inside Story” takes you behind the biggest faith, culture and political headlines of the week. In 15 minutes or less, Christian Post staff writers and editors will help you navigate and understand what’s driving each story, the issues at play — and why it all matters.
Listen to more Christian podcasts today on the Edifi app — and be sure to subscribe to The Inside Story on your favorite platforms.
BNY and Carnegie Mellon University announce five-year $10 million partnership supporting AI research

The $10 million deal aims to bring students, faculty and staff together alongside BNY experts to advance AI applications and systems and to prepare the next generation of leaders.
Known as the BNY AI Lab, the collaboration will focus on technologies and frameworks that can ensure robust governance of mission-critical AI applications.
“As AI drives productivity, unlocks growth and transforms industries, Pittsburgh has cemented its role as a global hub for innovation and talent, reinforcing Pennsylvania’s leadership in shaping the broader AI ecosystem,” comments Robin Vince, CEO at BNY. “Building on BNY’s 150-year legacy in the Commonwealth, we are proud to expand our work with Carnegie Mellon University to help attract world-class talent and pioneer AI research with an impact far beyond the region.”
A dedicated space for the collaboration will be created at the University’s Pittsburgh campus during the 2025-26 academic year.
“AI has emerged as one of the single most important intellectual developments of our time, and it is rapidly expanding into every sector of our economy,” adds Farnam Jahanian, President of Carnegie Mellon. “Carnegie Mellon University is thrilled to collaborate with BNY – a global financial services powerhouse – to responsibly develop and scale emerging AI technologies and democratize their impact for the benefit of industry and society at large.”
Oyster-I: Beyond Refusal — Constructive Safety Alignment for Responsible Language Models

By Ranjie Duan and 26 other authors
Abstract: Large language models (LLMs) typically deploy safety mechanisms to prevent harmful content generation. Most current approaches focus narrowly on risks posed by malicious actors, often framing risks as adversarial events and relying on defensive refusals. However, in real-world settings, risks also come from non-malicious users seeking help while under psychological distress (e.g., self-harm intentions). In such cases, the model’s response can strongly influence the user’s next actions. Simple refusals may lead them to repeat, escalate, or move to unsafe platforms, creating worse outcomes. We introduce Constructive Safety Alignment (CSA), a human-centric paradigm that protects against malicious misuse while actively guiding vulnerable users toward safe and helpful results. Implemented in Oyster-I (Oy1), CSA combines game-theoretic anticipation of user reactions, fine-grained risk boundary discovery, and interpretable reasoning control, turning safety into a trust-building process. Oy1 achieves state-of-the-art safety among open models while retaining high general capabilities. On our Constructive Benchmark, it shows strong constructive engagement, close to GPT-5, and unmatched robustness on the Strata-Sword jailbreak dataset, nearing GPT-o1 levels. By shifting from refusal-first to guidance-first safety, CSA redefines the model-user relationship, aiming for systems that are not just safe, but meaningfully helpful. We release Oy1, code, and the benchmark to support responsible, user-centered AI.
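The refusal-first versus guidance-first distinction the abstract draws can be made concrete with a toy decision policy. The sketch below is a hypothetical illustration only: the intent labels, risk threshold, and canned responses are assumptions chosen for exposition, not the paper’s actual CSA method or the Oyster-I model.

```python
# Illustrative sketch only: a toy "guidance-first" response policy inspired by the
# abstract's refusal-first vs. guidance-first framing. Labels, threshold, and
# responses are hypothetical; this is not the CSA implementation or Oyster-I code.
from dataclasses import dataclass
from enum import Enum, auto


class Intent(Enum):
    BENIGN = auto()        # ordinary request, no safety concern
    DISTRESSED = auto()    # non-malicious user under psychological distress
    MALICIOUS = auto()     # adversarial attempt to elicit harmful content


@dataclass
class Assessment:
    intent: Intent
    risk: float  # 0.0 (clearly safe) to 1.0 (clearly harmful)


def refusal_first(assessment: Assessment) -> str:
    """Baseline policy: any elevated risk triggers a flat refusal."""
    if assessment.risk > 0.3:
        return "I can't help with that."
    return "<normal answer>"


def guidance_first(assessment: Assessment) -> str:
    """Toy guidance-first policy: refuse only clear misuse; otherwise steer
    vulnerable users toward safe, constructive help instead of stonewalling."""
    if assessment.intent is Intent.MALICIOUS and assessment.risk > 0.3:
        return "I can't help with that."
    if assessment.intent is Intent.DISTRESSED:
        return ("I'm sorry you're going through this. I can't give instructions "
                "that could hurt you, but I can talk through what you're feeling "
                "and point you to crisis resources and professional support.")
    return "<normal answer>"


if __name__ == "__main__":
    case = Assessment(intent=Intent.DISTRESSED, risk=0.7)
    print("refusal-first:", refusal_first(case))
    print("guidance-first:", guidance_first(case))
```

In this toy contrast, the refusal-first baseline stonewalls a distressed user, while the guidance-first policy reserves refusal for clear misuse and otherwise redirects toward support, which is the behavioral shift the abstract describes.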
Submission history
From: Ranjie Duan
[v1] Tue, 2 Sep 2025 03:04:27 UTC (5,745 KB)
[v2] Thu, 4 Sep 2025 11:54:06 UTC (5,745 KB)
[v3] Mon, 8 Sep 2025 15:18:35 UTC (5,746 KB)
[v4] Fri, 12 Sep 2025 04:23:22 UTC (5,747 KB)