
Events & Conferences

Physics-constrained machine learning for scientific computing



Commercial applications of deep learning have been making headlines for years — never more so than this spring. More surprisingly, deep-learning methods have also shown promise for scientific computing, where they can be used to predict solutions to partial differential equations (PDEs). These equations are often prohibitively expensive to solve numerically; using data-driven methods has the potential to transform both scientific and engineering applications of scientific computing, including aerodynamics, ocean and climate, and reservoir modeling.

A fundamental challenge is that the predictions of deep-learning models trained on physical data typically ignore fundamental physical principles. Such models might, for instance, violate system conservation laws: the solution to a heat transfer problem may fail to conserve energy, or the solution to a fluid flow problem may fail to conserve mass. Similarly, a model’s solution may violate boundary conditions — say, allowing heat flow through an insulator at the boundary of a physical system. This can happen even when the model’s training data includes no such violations: at inference time, the model may simply extrapolate from patterns in the training data in an illicit way.

In a pair of recent papers accepted at the International Conference on Machine Learning (ICML) and the International Conference on Learning Representations (ICLR), we investigate the problem of adding known physics constraints to the predictive outputs of machine learning (ML) models that compute solutions to PDEs.


The ICML paper, “Learning physical models that can respect conservation laws”, which we will present in July, focuses on satisfying conservation laws with black-box models. We show that, for certain types of challenging PDE problems with propagating discontinuities, known as shocks, our approach to constraining model outputs works better than its predecessors: it more sharply and accurately captures the physical solution and its uncertainty and yields better performance on downstream tasks.

In this paper, we collaborated with Derek Hansen, a PhD student in the Department of Statistics at the University of Michigan, who was an intern at AWS AI Labs at the time, and Michael Mahoney, an Amazon Scholar in Amazon’s Supply Chain Optimization Technologies organization and a professor of statistics at the University of California, Berkeley.

In a complementary paper we presented at this year’s ICLR, “Guiding continuous operator learning through physics-based boundary constraints”, we, together with Nadim Saad, an AWS AI Labs intern at the time and a PhD student at the Institute for Computational and Mathematical Engineering (ICME) at Stanford University, focus on enforcing physics through boundary conditions. The modeling approach we describe in this paper is a so-called constrained neural operator, and it exhibits up to a 20-fold performance improvement over previous operator models.

So that scientists working with models of physical systems can benefit from our work, we’ve released the code for the models described in both papers (conservation laws | boundary constraints) on GitHub. We also presented both works in March 2023 at the AAAI symposium on Computational Approaches to Scientific Discovery.

Danielle Maddix Robinson on physics-constrained machine learning for scientific computing

A talk presented in April 2023 at the Machine Learning and Dynamical Systems Seminar at the Alan Turing Institute.

Conservation laws

Recent work in scientific machine learning (SciML) has focused on incorporating physical constraints into the learning process as part of the loss function. In other words, the physical information is treated as a soft constraint or regularization.


A main issue with these approaches is that they do not guarantee that the physical property of conservation is satisfied. To address this issue, in “Learning physical models that can respect conservation laws”, we propose ProbConserv, a framework for incorporating constraints into a generic SciML architecture. Instead of expressing conservation laws in the differential forms of PDEs, which are commonly used in SciML as extra terms in the loss function, ProbConserv converts them into their integral form. This allows us to use ideas from finite-volume methods to enforce conservation.

In finite-volume methods, a spatial domain — say, the region through which heat is propagating — is discretized into a finite set of smaller volumes called control volumes. The method maintains the balance of mass, energy, and momentum throughout this domain by applying the integral form of the conservation law locally across each control volume. Local conservation requires that the out-flux from one volume equals the in-flux to an adjacent volume. By enforcing the conservation law across each control volume, the finite-volume method guarantees global conservation across the whole domain, where the rate of change of the system’s total mass is given by the change in fluxes along the domain boundaries.
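This bookkeeping can be sketched in a few lines. The code below is an illustrative 1-D finite-volume step with a simple upwind flux (our illustration, not code from either paper): because each cell’s mass changes only through the fluxes at its two faces, the interior fluxes telescope away in the sum, and the total-mass change reduces to the boundary fluxes.

```python
import numpy as np

# Minimal 1-D finite-volume update (illustrative). Local conservation at each
# cell face implies global conservation: interior fluxes cancel in the sum,
# so total mass changes only through the domain-boundary fluxes.

n_cells, dx, dt = 100, 0.01, 1e-4
x = (np.arange(n_cells) + 0.5) * dx          # cell centers on [0, 1]
u = np.exp(-((x - 0.5) ** 2) / 0.01)         # initial density in each cell

def face_fluxes(u):
    """Upwind advective flux (unit velocity) at each of the n+1 cell faces."""
    flux = np.empty(len(u) + 1)
    flux[1:] = u                             # flux leaving each cell rightward
    flux[0] = 0.0                            # no inflow at the left boundary
    return flux

mass_before = u.sum() * dx
flux = face_fluxes(u)
u = u - (dt / dx) * (flux[1:] - flux[:-1])   # conservative update per cell
mass_after = u.sum() * dx

# Global change in mass equals the net boundary flux (in minus out) times dt:
boundary_change = (flux[0] - flux[-1]) * dt
assert np.isclose(mass_after - mass_before, boundary_change)
```

The assertion holds to floating-point precision regardless of the initial profile, which is exactly the guarantee the finite-volume construction provides.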

The integral form of a conservation law states that the rate of change of the total mass of the system over a domain (Ω) is equal to the difference between the in-flux and out-flux along the domain boundaries (∂Ω).
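In symbols (standard notation, not quoted from the paper): for a conserved density u with flux F and outward normal n, the integral form reads

```latex
\frac{d}{dt} \int_{\Omega} u \, dV \;=\; -\oint_{\partial\Omega} \mathbf{F} \cdot \mathbf{n} \, dS,
```

so whenever the net flux through the boundary is zero, the total mass over the domain is constant in time.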

More specifically, the first step in the ProbConserv method is to use a probabilistic machine learning model — such as a Gaussian process, attentive neural process (ANP), or ensembles of neural-network models — to estimate the mean and variance of the outputs of the physical model. We then use the integral form of the conservation law to perform a Bayesian update to the mean and covariance of the distribution of the solution profile such that it satisfies the conservation constraint exactly in the limit.
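As a rough sketch of that second step (our simplification, not the paper’s implementation): if the conservation constraint is linear in the discretized solution, G u = b, then conditioning a Gaussian on it is a closed-form update.

```python
import numpy as np

# Sketch: condition an unconstrained Gaussian estimate (mu, Sigma) of the
# solution values on a linear conservation constraint G @ u = b. This is the
# standard Gaussian-conditioning formula; it is our illustration of the idea,
# not the paper's code.

def constrain(mu, Sigma, G, b):
    """Condition N(mu, Sigma) on the linear constraint G @ u = b."""
    S = G @ Sigma @ G.T                      # covariance in constraint space
    K = Sigma @ G.T @ np.linalg.inv(S)       # gain: high-variance cells move most
    return mu + K @ (b - G @ mu), Sigma - K @ G @ Sigma

# Toy example: five cells of width dx; total mass must equal 1.
dx = 0.2
mu = np.array([0.8, 1.1, 0.9, 1.2, 1.3])     # unconstrained mean (mass 1.06)
Sigma = np.diag([0.1, 0.2, 0.1, 0.3, 0.1])   # unconstrained variances
G = np.full((1, 5), dx)                      # G @ u = dx * sum(u) = total mass
b = np.array([1.0])

mu_c, Sigma_c = constrain(mu, Sigma, G, b)
assert np.isclose(G @ mu_c, b)               # the constraint now holds exactly
```

Note that cells with larger variance absorb more of the correction, which matches the intuition that the update should concentrate where the unconstrained model is least certain.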


In the paper, we provide a detailed analysis of ProbConserv’s application to the generalized porous-medium equation (GPME), a widely used parameterized family of PDEs. The GPME has been used in applications ranging from underground flow transport to nonlinear heat transfer to water desalination and beyond. By varying the PDE parameters, we can describe PDE problems with different levels of complexity, ranging from “easy” problems, such as parabolic PDEs that model smooth diffusion processes, to “hard” nonlinear hyperbolic-like PDEs with shocks, such as the Stefan problem, which has been used to model two-phase flow between water and ice, crystal growth, and more complex porous media such as foams.
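The GPME is commonly written as a nonlinear diffusion equation (standard form; the parameter names here are the conventional ones):

```latex
\frac{\partial u}{\partial t} = \nabla \cdot \big( k(u)\, \nabla u \big),
```

where a constant coefficient k recovers the “easy” smooth-diffusion case, k(u) = u^m gives the classical porous-medium equation, and a discontinuous step coefficient yields the “hard” Stefan-type problems with shocks.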

For easy GPME variants, ProbConserv compares well to state-of-the-art competitors, and for harder GPME variants, it outperforms other ML-based approaches that do not guarantee volume conservation. ProbConserv seamlessly enforces physical conservation constraints, maintains probabilistic uncertainty quantification (UQ), and deals well with the problem of estimating shock propagation, which is difficult given ML models’ bias toward smooth and continuous behavior. It also effectively handles heteroskedasticity, or fluctuation in variables’ standard deviations. In all cases, it achieves superior predictive performance on downstream tasks, such as predicting shock location, which is a challenging problem even for advanced numerical solvers.

Examples

Conservation of mass can be violated by a black-box deep-learning model (here, the ANP), even when the PDE is applied as a soft constraint (here, SoftC-ANP) on the loss function, à la physics-informed neural networks (PINNs). This figure shows the variation of total mass over time for the smooth constant coefficient diffusion equation (an “easy” GPME example). The true mass remains zero, since there is zero net flux from the domain boundaries, and thus mass cannot be created or destroyed in the domain interior.

Density solution profiles with uncertainty quantification. In the “hard” version of the GPME problem, also known as the Stefan problem, the solution profile may contain a moving sharp interface in space, known as a shock. The shock here separates the region with fluid from the degenerate one with zero fluid density. The uncertainty is largest in the shock region and becomes smaller in the areas away from it. The main idea behind ProbConserv’s UQ method is to use the uncertainty in the unconstrained black box to modify the mean and covariance at the locations where the variance is largest, to satisfy the conservation constraint. The constant-variance assumption in the HardC-ANP baseline does not result in improvement on this hard task, while ProbConserv results in a better estimate of the solution at the shock and a threefold improvement in the mean squared error (MSE).

Downstream task. Histogram of the posterior of the shock position computed by ProbConserv and the other baselines. While the baseline models skew the distribution of the shock position, ProbConserv computes a distribution that is well-centered around the true shock position. This illustrates that enforcing physical constraints such as conservation is necessary in order to provide reliable and accurate estimations of the shock position.

Boundary conditions

Boundary conditions (BCs) are physics-enforced constraints that solutions of PDEs must satisfy at specific spatial locations. These constraints carry important physical meaning and guarantee the existence and the uniqueness of PDE solutions. Current deep-learning-based approaches that aim to solve PDEs rely heavily on training data to help models learn BCs implicitly. There is no guarantee, though, that these models will satisfy the BCs during evaluation. In our ICLR 2023 paper, “Guiding continuous operator learning through physics-based boundary constraints”, we propose an efficient, hard-constrained, neural-operator-based approach to enforcing BCs.
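For a solution u on a domain Ω with boundary ∂Ω and outward normal n, the three conditions treated below take the standard forms:

```latex
u = g \ \text{on } \partial\Omega \ \ (\text{Dirichlet}), \qquad
\frac{\partial u}{\partial n} = h \ \text{on } \partial\Omega \ \ (\text{Neumann}), \qquad
u(a) = u(b) \ \ (\text{periodic, 1-D}).
```

An insulating boundary in a heat problem is the Neumann case with h = 0: zero heat flux through the wall.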


Where most SciML methods (for example, PINNs) parameterize the solution to PDEs with a neural network, neural operators aim to learn the mapping from PDE coefficients or initial conditions to solutions. At the core of every neural operator is a kernel function, formulated as an integral operator, that describes the evolution of a physical system over time. For our study, we chose the Fourier neural operator (FNO) as an example of a kernel-based neural operator.

We propose a model we call the boundary-enforcing operator network (BOON). Given a neural operator representing a PDE solution, a training dataset, and prescribed BCs, BOON applies structural corrections to the neural operator to ensure that the predicted solution satisfies the system BCs.
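To make the idea of a hard, structural correction concrete, here is the simplest possible version for Dirichlet conditions in 1-D (our illustration only; BOON’s actual corrections act on the operator’s kernel): an additive correction that interpolates the boundary residuals across the domain, so the output satisfies the BCs exactly no matter what the underlying network predicted.

```python
import numpy as np

# Illustrative hard enforcement of Dirichlet BCs on a 1-D prediction: add a
# linear-in-x correction that cancels the boundary residuals, leaving the
# corrected field matching the prescribed boundary values exactly.

def enforce_dirichlet(u_pred, x, g_left, g_right):
    """Correct u_pred on grid x so that u(x[0]) = g_left and u(x[-1]) = g_right."""
    t = (x - x[0]) / (x[-1] - x[0])          # 0 at the left boundary, 1 at the right
    correction = (1 - t) * (g_left - u_pred[0]) + t * (g_right - u_pred[-1])
    return u_pred + correction

x = np.linspace(0.0, 1.0, 101)
u_pred = np.sin(np.pi * x) + 0.05            # pretend network output; BCs violated
u = enforce_dirichlet(u_pred, x, g_left=0.0, g_right=0.0)
assert np.isclose(u[0], 0.0) and np.isclose(u[-1], 0.0)
```

The correction is exact by construction at both endpoints and vanishes nowhere abruptly, so the interior solution is only gently perturbed.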

BOON architectures. Kernel correction architectures for commonly used Dirichlet, Neumann, and periodic boundary conditions that carry different physical meanings.

We provide our refinement procedure and demonstrate that BOON’s solutions satisfy physics-based BCs, such as Dirichlet, Neumann, and periodic. We also report extensive numerical experiments on a wide range of problems including the heat and wave equations and Burgers’s equation, along with the challenging 2-D incompressible Navier-Stokes equations, which are used in climate and ocean modeling. We show that enforcing these physical constraints results in zero boundary error and improves the accuracy of solutions on the interior of the domain. BOON’s correction method exhibits a 2-fold to 20-fold improvement over a given neural-operator model in relative L2 error.
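The relative L2 error quoted above is the standard normalized error norm (standard definition; the paper’s evaluation code may differ in batching or reduction details):

```python
import numpy as np

# Relative L2 error between a predicted and a reference solution field.
def relative_l2_error(u_pred, u_true):
    return np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)

u_true = np.linspace(0.0, 1.0, 11)           # reference solution on a grid
u_pred = u_true + 0.01                       # prediction with a small uniform bias
err = relative_l2_error(u_pred, u_true)      # ~0.017 for this toy example
```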

Examples

Nonzero flux at an insulator on the boundary. The solution to the unconstrained Fourier-neural-operator (FNO) model for the heat equation has a nonzero flux at the left insulating boundary, which means that it allows heat to flow through an insulator. This is in direct contradiction to the physics-enforced boundary constraint. BOON, which satisfies this so-called Neumann boundary condition, ensures that the gradient is zero at the insulator. Similarly, at the right boundary, we see that the FNO solution has a negative gradient at a positive heat source and that the BOON solution corrects this nonphysical result. Guaranteeing no violation of the underlying physics is critical to the practical adoption of these deep-learning models by practitioners in the field.

Stokes’s second problem. This figure shows the velocity profile and corresponding absolute errors over time obtained by BOON (top). BOON improves the accuracy at the boundary, which, importantly, also improves accuracy on the interior of the domain compared to the unconstrained Fourier-neural-operator (FNO) model (bottom), where the errors at the boundary propagate inward over time.

2-D Navier-Stokes lid-driven cavity flow initial condition. The initial vorticity field (perpendicular to the screen), which is defined as the curl of the velocity field. At the initial time step, t = 0, the only nonzero component of the horizontal velocity is given by the top constant Dirichlet boundary condition, which drives the viscous incompressible flow at the later time steps. The other boundaries have the common no-slip Dirichlet boundary condition, which fixes the velocity to be zero at those locations.

Navier-Stokes lid-driven flow

2-D Navier-Stokes lid-driven cavity flow vorticity field. The vorticity field (perpendicular to the screen) within a square cavity filled with an incompressible fluid, which is induced by a fixed nonzero horizontal velocity prescribed by the Dirichlet boundary condition at the top boundary line for a 25-step (T=25) prediction until final time t = 2.

2-D Navier-Stokes lid-driven cavity flow relative error.

The L2 relative-error plots show significantly higher relative error over time for the data-driven Fourier neural operator (FNO) compared to that of our constrained BOON model on the Navier-Stokes lid-driven cavity flow problem for both a random test sample and the average over the test samples.

Acknowledgements: This work would not have been possible without the help of our coauthor Michael W. Mahoney, an Amazon Scholar; coauthors and PhD student interns Derek Hansen and Nadim Saad; and mentors Yuyang Wang and Margot Gerritsen.







Events & Conferences

An inside look at Meta’s transition from C to Rust on mobile



Have you ever worked in legacy code? Are you curious about what it takes to modernize systems at a massive scale?

Pascal Hartig is joined on the latest Meta Tech Podcast by Elaine and Buping, two software engineers working on a bold project to rewrite the decades-old C code in one of Meta’s core messaging libraries in Rust. It’s an ambitious effort that will transform a central messaging library that is shared across Messenger, Facebook, Instagram, and Meta’s AR/VR platforms.

They discuss what it takes to take on a project of this scope without a background in Rust, how they’re approaching it, and what it means to optimize for ‘developer happiness.’


You can find the episode wherever you get your podcasts.

The Meta Tech Podcast is brought to you by Meta and highlights the work Meta’s engineers are doing at every level, from low-level frameworks to end-user features.

Send us feedback on Instagram, Threads, or X.

And if you’re interested in learning more about career opportunities at Meta, visit the Meta Careers page.







Events & Conferences

Amazon Research Awards recipients announced



Amazon Research Awards (ARA) provides unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines. This cycle, ARA received many excellent research proposals from across the world and today is publicly announcing 73 award recipients who represent 46 universities in 10 countries.

This announcement includes awards funded under five calls for proposals during the fall 2024 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability. Proposals were reviewed for the quality of their scientific content and their potential to impact both the research community and society. Additionally, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open-source licenses.

Recipients have access to more than 700 Amazon public datasets and can utilize AWS AI/ML services and tools through their AWS Promotional Credits. Recipients also are assigned an Amazon research contact who offers consultation and advice, along with opportunities to participate in Amazon events and training sessions.


“Automated Reasoning is an important area of research for Amazon, with potential applications across various features and applications to help improve security, reliability, and performance for our customers. Through the ARA program, we collaborate with leading academic researchers to explore challenges in this field,” said Robert Jones, senior principal scientist with the Cloud Automated Reasoning Group. “We were again impressed by the exceptional response to our Automated Reasoning call for proposals this year, receiving numerous high-quality submissions. Congratulations to the recipients! We’re excited to support their work and partner with them as they develop new science and technology in this important area.”


“At Amazon, we believe that solving the world’s toughest sustainability challenges benefits from both breakthrough scientific research and open and bold collaboration. Through programs like the Amazon Research Awards program, we aim to support academic research that could contribute to our understanding of these complex issues,” said Kommy Weldemariam, Director of Science and Innovation Sustainability. “The selected proposals represent innovative projects that we hope will help advance knowledge in this field, potentially benefiting customers, communities, and the environment.”

ARA funds proposals throughout the year in a variety of research areas. Applicants are encouraged to visit the ARA call for proposals page for more information or send an email to be notified of future open calls.

The tables below list, in alphabetical order by last name, fall 2024 cycle call-for-proposal recipients, sorted by research area.

AI for Information Security

Recipient University Research title
Christopher Amato Northeastern University Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms
Bernd Bischl Ludwig Maximilian University of Munich Improving Generative and Foundation Models Reliability via Uncertainty-awareness
Shiqing Ma University Of Massachusetts Amherst LLM and Domain Adaptation for Attack Detection
Alina Oprea Northeastern University Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms
Roberto Perdisci University of Georgia ContextADBench: A Comprehensive Benchmark Suite for Contextual Anomaly Detection

Automated Reasoning

Recipient University Research title
Nada Amin Harvard University LLM-Augmented Semi-Automated Proofs for Interactive Verification
Suguman Bansal Georgia Institute of Technology Certified Inductive Generalization in Reinforcement Learning
Ioana Boureanu University of Surrey Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems
Omar Haider Chowdhury Stony Brook University Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege
Stefan Ciobaca Alexandru Ioan Cuza University An Interactive Proof Mode for Dafny
João Ferreira INESC-ID Polyglot Automated Program Repair for Infrastructure as Code
Sicun Gao University Of California, San Diego Monte Carlo Trees with Conflict Models for Proof Search
Mirco Giacobbe University of Birmingham Neural Software Verification
Tobias Grosser University of Cambridge Synthesis-based Symbolic BitVector Simplification for Lean
Ronghui Gu Columbia University Scaling Formal Verification of Security Properties for Unmodified System Software
Alexey Ignatiev Monash University Huub: Next-Gen Lazy Clause Generation
Kenneth McMillan University of Texas At Austin Synthesis of Auxiliary Variables and Invariants for Distributed Protocol Verification
Alexandra Mendes University of Porto Overcoming Barriers to the Adoption of Verification-Aware Languages
Jason Nieh Columbia University Scaling Formal Verification of Security Properties for Unmodified System Software
Rohan Padhye Carnegie Mellon University Automated Synthesis and Evaluation of Property-Based Tests
Nadia Polikarpova University Of California, San Diego Discovering and Proving Critical System Properties with LLMs
Fortunat Rajaona University of Surrey Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems
Subhajit Roy Indian Institute of Technology Kanpur Theorem Proving Modulo LLM
Gagandeep Singh University of Illinois At Urbana–Champaign Trustworthy LLM Systems using Formal Contracts
Scott Stoller Stony Brook University Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege
Peter Stuckey Monash University Huub: Next-Gen Lazy Clause Generation
Yulei Sui University of New South Wales Path-Sensitive Typestate Analysis through Sparse Abstract Execution
Nikos Vasilakis Brown University Semantics-Driven Static Analysis for the Unix/Linux Shell
Ping Wang Stevens Institute of Technology Leveraging Large Language Models for Reasoning Augmented Searching on Domain-specific NoSQL Database
John Wawrzynek University of California, Berkeley GPU-Accelerated High-Throughput SAT Sampling

AWS AI

Recipient University Research title
Panagiotis Adamopoulos Emory University Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations
Vikram Adve University of Illinois at Urbana–Champaign Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models
Frances Arnold California Institute of Technology Closed-loop Generative Machine Learning for De Novo Enzyme Discovery and Optimization
Yonatan Bisk Carnegie Mellon University Useful, Safe, and Robust Multiturn Interactions with LLMs
Shiyu Chang University of California, Santa Barbara Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing
Yuxin Chen University of Pennsylvania Provable Acceleration of Diffusion Models for Modern Generative AI
Tianlong Chen University of North Carolina at Chapel Hill Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing
Mingyu Ding University of North Carolina at Chapel Hill Aligning Long Videos and Language as Long-Horizon World Models
Nikhil Garg Cornell University Market Design for Responsible Multi-agent LLMs
Jessica Hullman Northwestern University Human-Aligned Uncertainty Quantification in High Dimensions
Christopher Jermaine Rice University Fast, Trusted AI Using the EINSUMMABLE Compiler
Yunzhu Li Columbia University Physics-Informed Foundation Models Through Embodied Interactions
Pattie Maes Massachusetts Institute of Technology Understanding How LLM Agents Deviate from Human Choices
Sasa Misailovic University of Illinois at Urbana–Champaign Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models
Kristina Monakhova Cornell University Trustworthy extreme imaging for science using interpretable uncertainty quantification
Todd Mowry Carnegie Mellon University Efficient LLM Serving on Trainium via Kernel Generation
Min-hwan Oh Seoul National University Mutually Beneficial Interplay Between Selection Fairness and Context Diversity in Contextual Bandits
Patrick Rebeschini University of Oxford Optimal Regularization for LLM Alignment
Jose Renau University of California, Santa Cruz Verification Constrained Hardware Optimization using Intelligent Design Agentic Programming
Vilma Todri Emory University Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations
Aravindan Vijayaraghavan Northwestern University Human-Aligned Uncertainty Quantification in High Dimensions
Wei Yang University of Texas at Dallas Optimizing RISC-V Compilers with RISC-LLM and Syntax Parsing
Huaxiu Yao University of North Carolina at Chapel Hill Aligning Long Videos and Language as Long-Horizon World Models
Amy Zhang University of Washington Tools for Governing AI Agent Autonomy
Ruqi Zhang Purdue University Efficient Test-time Alignment for Large Language Models and Large Multimodal Models
Zheng Zhang Rutgers University-New Brunswick AlphaQC: An AI-powered Quantum Circuit Optimizer and Denoiser

AWS Cryptography

Recipient University Research title
Alexandra Boldyreva Georgia Institute of Technology Quantifying Information Leakage in Searchable Encryption Protocols
Maria Eichlseder Graz University of Technology, Austria SALAD – Systematic Analysis of Lightweight Ascon-based Designs
Venkatesan Guruswami University of California, Berkeley Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing
Joseph Jaeger Georgia Institute of Technology Analyzing Chat Encryption for Group Messaging
Aayush Jain Carnegie Mellon Large Scale Multiparty Silent Preprocessing for MPC from LPN
Huijia Lin University of Washington Large Scale Multiparty Silent Preprocessing for MPC from LPN
Hamed Nemati KTH Royal Institute of Technology Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library
Karl Palmskog KTH Royal Institute of Technology Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library
Chris Peikert University of Michigan, Ann Arbor Practical Third-Generation FHE and Bootstrapping
Dimitrios Skarlatos Carnegie Mellon University Scale-Out FHE LLMs on GPUs
Vinod Vaikuntanathan Massachusetts Institute of Technology Can Quantum Computers (Really) Factor?
Daniel Wichs Northeastern University Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing
David Wu University Of Texas At Austin Fast Private Information Retrieval and More using Homomorphic Encryption

Sustainability

Recipient University Research title
Meeyoung Cha Max Planck Institute Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring
Jingrui He University of Illinois at Urbana–Champaign Foundation Model Enabled Earth’s Ecosystem Monitoring
Pedro Lopes University of Chicago AI-powered Tools that Enable Engineers to Make & Re-make Sustainable Hardware
Cheng Yaw Low Max Planck Institute Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring







Events & Conferences

Independent evaluations demonstrate Nova Premier’s safety



AI safety is a priority at Amazon. Our investment in safe, transparent, and responsible AI (RAI) includes collaboration with the global community and policymakers. We are members of and collaborate with organizations such as the Frontier Model Forum, the Partnership on AI, and other forums organized by government agencies such as the National Institute of Standards and Technology (NIST). Consistent with Amazon’s endorsement of the Korea Frontier AI Safety Commitments, we published our Frontier Model Safety Framework earlier this year.

Amazon Nova Premier’s guardrails help prevent generation of unsafe content.

During the development of the Nova Premier model, we conducted a comprehensive evaluation to assess its performance and safety. This included testing on both internal and public benchmarks, as well as internal automated red-teaming and third-party red-teaming exercises. Once the final model was ready, we prioritized obtaining unbiased, third-party evaluations of the model’s robustness against RAI controls. In this post, we outline the key findings from these evaluations, demonstrating the strength of our testing approach and Nova Premier’s standing as a safe model. Specifically, we cover our evaluations with two third-party evaluators: PRISM AI and ActiveFence.

Evaluation of Nova Premier against PRISM AI

PRISM Eval’s Behavior Elicitation Tool (BET) dynamically and systematically stress-tests AI models’ safety guardrails. The methodology focuses on measuring how many adversarial attempts (steps) it takes to get a model to generate harmful content across several key risk dimensions. The central metric is “steps to elicit” — the number of increasingly sophisticated prompting attempts required before a model generates an inappropriate response. A higher number of steps indicates stronger safety measures, as the model is more resistant to manipulation. The PRISM risk dimensions (inspired by the MLCommons AI Safety Benchmarks) include CBRNE weapons, violent crimes, non-violent crimes, defamation, and hate, amongst several others.


Using the BET Eval tool and its V1.0 metric, which is tailored toward non-reasoning models, we compared the recently released Nova models (Pro and Premier) to the latest models in the same class: Claude (3.5 v2 and 3.7 non-reasoning) and Llama4 Maverick, all available through Amazon Bedrock. PRISM BET conducts black-box evaluations (where model developers don’t have access to the test prompts) of models integrated with their API. The evaluation conducted with BET Eval MAX, PRISM’s most comprehensive and aggressive testing suite, revealed significant variations in safety against malicious instructions. Nova models demonstrated superior overall safety performance, with an average of 43 steps for Premier and 52 steps for Pro, compared to 37.7 for Claude 3.5 v2 and fewer than 12 steps for other models in the comparison set (namely, 9.9 for Claude 3.7, 11.5 for Claude 3.7 thinking, and 6.5 for Maverick). This higher step count suggests that, on average, Nova’s safety guardrails are more sophisticated and harder to circumvent through adversarial prompting. The figure below presents the number of steps per harm category evaluated through BET Eval MAX.

Results of tests using PRISM’s BET Eval MAX testing suite.

The PRISM evaluation provides valuable insights into the relative safety of different Amazon Bedrock models. Nova’s strong performance, particularly in hate speech and defamation resistance, represents meaningful progress in AI safety. However, the results also highlight the ongoing challenge of building truly robust safety measures into AI systems. As the field continues to evolve, frameworks like BET will play an increasingly important role in benchmarking and improving AI safety. As part of this collaboration, Nicolas Miailhe, CEO of PRISM Eval, said, “It’s incredibly rewarding for us to see Nova outperforming strong baselines using the BET Eval MAX; our aim is to build a long-term partnership toward safer-by-design models and to make BET available to various model providers.” Organizations deploying AI systems should carefully consider these safety metrics when selecting models for their applications.

Manual red teaming with ActiveFence

The AI safety & security company ActiveFence benchmarked Nova Premier on Bedrock on prompts distributed across Amazon’s eight core RAI categories. ActiveFence also evaluated Claude 3.7 (non-reasoning mode) and GPT 4.1 API on the same set. The flag rate on Nova Premier was lower than that on the other two models, indicating that Nova Premier is the safest of the three.

Model 3P Flag Rate [↓ is better]
Nova Premier 12.0%
Sonnet 3.7 (non-reasoning) 20.6%
GPT4.1 API 22.4%


“Our role is to think like an adversary but act in service of safety,” said Guy Paltieli from ActiveFence. “By conducting a blind stress test of Nova Premier under realistic threat scenarios, we helped evaluate its security posture in support of Amazon’s broader responsible-AI goals, ensuring the model could be deployed with greater confidence.”

These evaluations conducted with PRISM and ActiveFence give us confidence in the strength of our guardrails and our ability to protect our customers’ safety when they use our models. While these evaluations demonstrate strong safety performance, we recognize that AI safety is an ongoing challenge requiring continuous improvement. These assessments represent a point-in-time snapshot, and we remain committed to regular testing and enhancement of our safety measures. No AI system can guarantee perfect safety in all scenarios, which is why we maintain monitoring and response systems after deployment.

Acknowledgments: Vincent Ponzo, Elyssa Vincent






