Better-performing “25519” elliptic-curve cryptography – Amazon Science
Cryptographic algorithms are essential to online security, and at Amazon Web Services (AWS), we implement cryptographic algorithms in our open-source cryptographic library, AWS libcrypto (AWS-LC), which is based on code from Google’s BoringSSL project. AWS-LC offers AWS customers implementations of cryptographic algorithms that are secure and optimized for AWS hardware.
Two cryptographic algorithms that have become increasingly popular are x25519 and Ed25519, both based on an elliptic curve known as curve25519. To improve the customer experience when using these algorithms, we recently took a deeper look at their implementations in AWS-LC. Henceforth, we use x/Ed25519 as shorthand for “x25519 and Ed25519”.
In 2023, AWS released multiple assembly-level implementations of x/Ed25519 in AWS-LC. By combining automated reasoning and state-of-the-art optimization techniques, these implementations improved performance over the existing AWS-LC implementations and also increased assurance of their correctness.
In particular, we prove functional correctness using automated reasoning and employ optimizations targeted to specific CPU microarchitectures for the instruction set architectures x86_64 and Arm64. We also take care to execute the algorithms in constant time, to thwart side-channel attacks that infer secret information from the durations of computations.
In this post, we explore different aspects of our work, including the process for proving correctness via automated reasoning, microarchitecture (μarch) optimization techniques, the special considerations for constant-time code, and the quantification of performance gains.
Elliptic-curve cryptography
Elliptic-curve cryptography is a method for doing public-key cryptography, which uses a pair of keys, one public and one private. One of the best-known public-key cryptographic schemes is RSA, in which the public key is a very large integer, and the corresponding private key consists of the integer’s prime factors. The RSA scheme can be used both to encrypt/decrypt data and to sign/verify data. (Members of our team recently blogged on Amazon Science about how we used automated reasoning to make the RSA implementation on Amazon’s Graviton2 chips faster and easier to deploy.)
Elliptic curves offer an alternate way to mathematically relate public and private keys; sometimes, this means we can implement schemes more efficiently. While the mathematical theory of elliptic curves is both broad and deep, the elliptic curves used in cryptography are typically defined by an equation of the form y² = x³ + ax² + bx + c, where a, b, and c are constants. You can plot the points that satisfy the equation on a 2-D graph.
An elliptic curve has the property that a line intersecting it at two points intersects it at no more than one other point. This property is used to define operations on the curve. For instance, the addition of two points on the curve can be defined not as the third point on the curve collinear with the first two but as that third point’s reflection around the axis of symmetry.
Now, if the coordinates of points on the curve are taken modulo some integer, the curve becomes a scatter of points in the plane, but a scatter that still exhibits symmetry, so the addition operation remains well defined. Curve25519 is named after a large prime integer — specifically, 2²⁵⁵ – 19. The set of numbers modulo the curve25519 prime, together with basic arithmetic operations such as multiplication of two numbers modulo the same prime, define the field in which our elliptic-curve operations take place.
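To make the field concrete, here is a minimal Python sketch of the basic field operations modulo the curve25519 prime. The function names are ours, and AWS-LC implements these operations on 64-bit limbs in hand-optimized assembly rather than with big-integer arithmetic.

```python
# A minimal sketch of curve25519 field arithmetic using Python's big integers.
P25519 = 2**255 - 19  # the curve25519 prime

def fadd(a, b): return (a + b) % P25519
def fsub(a, b): return (a - b) % P25519
def fmul(a, b): return (a * b) % P25519
def finv(a):    return pow(a, P25519 - 2, P25519)  # Fermat inverse; a must be nonzero

# Sanity check: any nonzero element times its inverse is 1 in the field.
assert fmul(1234567, finv(1234567)) == 1
```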
Successive execution of elliptic-curve additions is called scalar multiplication, where the scalar is the number of additions. With the elliptic curves used in cryptography, if you know only the result of the scalar multiplication, it is intractable to recover the scalar, if the scalar is sufficiently large. The result of the scalar multiplication becomes the basis of a public key, the original scalar the basis of a private key.
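The sketch below shows the textbook double-and-add way of realizing scalar multiplication, treating the group’s addition law as a black box; the names are illustrative, and, as discussed later, real curve25519 code uses constant-time ladders rather than this branching form.

```python
def scalar_mul(k, point, add, identity):
    """Add `point` to itself k times using double-and-add.

    `add` is the group's addition law (for curve25519, point addition on the
    curve); this sketch treats it as a black box.
    """
    acc, addend = identity, point
    while k:
        if k & 1:
            acc = add(acc, addend)   # fold in this bit of the scalar
        addend = add(addend, addend)  # double for the next bit
        k >>= 1
    return acc

# Exercised with ordinary modular addition purely so the sketch runs;
# unlike an elliptic-curve group, this stand-in is not cryptographically hard.
assert scalar_mul(77, 5, lambda a, b: (a + b) % 101, 0) == (77 * 5) % 101
```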
The x25519 and Ed25519 cryptographic algorithms
The x/Ed25519 algorithms have distinct purposes. The x25519 algorithm is a key agreement algorithm, used to securely establish a shared secret between two peers; Ed25519 is a digital-signature algorithm, used to sign and verify data.
The x/Ed25519 algorithms have been adopted in transport layer protocols such as TLS and SSH. In 2023, NIST announced an update to its FIPS 186-5 Digital Signature Standard that included the addition of Ed25519. The x25519 algorithm also plays a role in post-quantum safe cryptographic solutions, having been included as the classical algorithm in the TLS 1.3 and SSH hybrid scheme specifications for post-quantum key agreement.
Microarchitecture optimizations
When we write assembly code for a specific CPU architecture, we use its instruction set architecture (ISA). The ISA defines resources such as the available assembly instructions, their semantics, and the CPU registers accessible to the programmer. Importantly, the ISA defines the CPU in abstract terms; it doesn’t specify how the CPU should be realized in hardware.
The detailed implementation of the CPU is called the microarchitecture, and every μarch has unique characteristics. For example, while the AWS Graviton 2 CPU and AWS Graviton 3 CPU are both based on the Arm64 ISA, their μarch implementations are different. We hypothesized that if we could take advantage of the μarch differences, we could create x/Ed25519 implementations that were even faster than the existing implementations in AWS-LC. It turns out that this intuition was correct.
Let us look closer at how we took advantage of μarch differences. Different arithmetic operations can be defined on curve25519, and different combinations of those operations are used to construct the x/Ed25519 algorithms. Logically, the necessary arithmetic operations can be considered at three levels:
- Field operations: Operations within the field defined by the curve25519 prime 2²⁵⁵ – 19.
- Elliptic-curve group operations: Operations that apply to elements of the curve itself, such as the addition of two points, P1 and P2.
- Top-level operations: Operations implemented by iterative application of elliptic-curve group operations, such as scalar multiplication.
Each level has its own avenues for optimization. We focused our μarch-dependent optimizations on the level-one operations, while for levels two and three our implementations employ known state-of-the-art techniques and are largely the same for different μarchs. Below, we give a summary of the different μarch-dependent choices we made in our implementations of x/Ed25519.
- For modern x86_64 μarchs, we use the instructions MULX, ADCX, and ADOX, which are variations of the standard assembly instructions MUL (multiply) and ADC (add with carry) found in the instruction set extensions commonly called BMI2 and ADX. These instructions are special because, when used in combination, they can maintain two carry chains in parallel, which has been observed to boost performance by up to 30%. For older x86_64 μarchs that don’t support these instruction set extensions, we use more traditional single-carry chains.
- For Arm64 μarchs with improved integer multipliers, such as AWS Graviton 3, we use relatively straightforward schoolbook multiplication, which turns out to give good performance. AWS Graviton 2 has smaller multipliers. For this Arm64 μarch, we use subtractive forms of Karatsuba multiplication, which break multiplications down recursively (a sketch of the subtractive form follows this list). The reason is that, on these μarchs, 64×64-bit multiplication producing a 128-bit result has substantially lower throughput relative to other operations, making the number size at which Karatsuba optimization becomes worthwhile much smaller.
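To make the subtractive form concrete, here is a one-level Python sketch of the identity it exploits: three half-width multiplications replace four, at the cost of a few extra additions and subtractions. AWS-LC applies this at the level of 64-bit limbs in assembly; the sketch only illustrates the algebra.

```python
import random

def karatsuba_one_level(a, b, bits=256):
    """One level of subtractive Karatsuba on `bits`-bit integers."""
    half = bits // 2
    mask = (1 << half) - 1
    a0, a1 = a & mask, a >> half
    b0, b1 = b & mask, b >> half
    lo = a0 * b0
    hi = a1 * b1
    # Subtractive middle term: 3 multiplies instead of 4.
    mid = hi + lo - (a1 - a0) * (b1 - b0)
    return (hi << bits) + (mid << half) + lo

# The identity holds for any split point.
x, y = random.getrandbits(256), random.getrandbits(256)
assert karatsuba_one_level(x, y) == x * y
```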
We also optimized level-one operations that are the same for all μarchs. One example concerns the use of the binary greatest-common-divisor (GCD) algorithm to compute modular inverses. We use the “divstep” form of binary GCD, which lends itself to efficient implementation, but it also complicates the second goal we had: formally proving correctness.
Binary GCD is an iterative algorithm with two arguments, whose initial values are the numbers whose greatest common divisor we seek. The arguments are successively reduced in a well-defined way, until the value of one of them reaches zero. With two n-bit numbers, the standard implementation of the algorithm removes at least one bit total per iteration, so 2n iterations suffice.
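A minimal Python sketch of the standard binary GCD (with our own naming) makes that bit-removal argument visible: every pass either halves an argument or replaces the larger one by a difference that is halved on the next pass.

```python
def binary_gcd(a, b):
    """Classic binary GCD on nonnegative integers.

    Each iteration strips at least one bit from one of the arguments,
    which is why roughly 2n iterations suffice for n-bit inputs.
    """
    if a == 0: return b
    if b == 0: return a
    shift = 0
    while (a | b) & 1 == 0:            # factor out common powers of two
        a >>= 1; b >>= 1; shift += 1
    while a & 1 == 0:
        a >>= 1
    while b:
        while b & 1 == 0:
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a                         # difference is even, so it halves next pass
    return a << shift

assert binary_gcd(48, 36) == 12  # quick sanity check
```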
With divstep, however, determining the number of iterations needed to get down to the base case seems analytically difficult. The most tractable proof of the bound uses an elaborate inductive argument based on an intricate “stable hull” provably overapproximating the region in two-dimensional space containing the points corresponding to the argument values. Daniel Bernstein, one of the inventors of x25519 and Ed25519, proved the formal correctness of the bound using HOL Light, a proof assistant that one of us (John) created. (For more on HOL Light, see, again, our earlier RSA post.)
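For reference, the divstep recurrence itself is short; the hard part is bounding how many iterations it needs. The sketch below follows the Bernstein–Yang definition (the first argument f is kept odd) and only extracts the gcd; the full inversion algorithm also tracks a 2×2 transition matrix, which we omit here.

```python
def divstep(delta, f, g):
    """One Bernstein-Yang division step; f is assumed odd."""
    if delta > 0 and g & 1:
        return 1 - delta, g, (g - f) // 2
    return 1 + delta, f, (g + (g & 1) * f) // 2

def divstep_gcd(f, g, iterations):
    """Iterate divstep; with enough iterations, g reaches 0 and |f| is the gcd."""
    delta = 1
    for _ in range(iterations):
        delta, f, g = divstep(delta, f, g)
    return abs(f)

# Ten iterations are ample for tiny inputs; the bound for 255-bit inputs is
# exactly the quantity whose formal proof is discussed above.
assert divstep_gcd(21, 14, 10) == 7
```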
Performance results
In this section, we highlight the improvements in performance. For the sake of simplicity, we focus on only three μarchs: AWS Graviton 3, AWS Graviton 2, and Intel Ice Lake. To gather performance data, we used EC2 instances with matching CPU μarchs — c7g.4xlarge, c6g.4xlarge, and c6i.4xlarge, respectively; to measure each algorithm, we used the AWS-LC speed tool.
In the graphs below, all units are operations per second (ops/sec). The “before” columns represent the performance of the existing x/Ed25519 implementations in AWS-LC. The “after” columns represent the performance of the new implementations.
We observed the biggest improvement for the x25519 algorithm. Note that an x25519 operation in the graph below includes the two major operations needed for an x25519 key agreement: base-point multiplication and variable-point multiplication.
On average, over the AWS Graviton 2, AWS Graviton 3, and Intel Ice Lake microarchitectures, we saw an 86% improvement in performance.
Proving correctness
We develop the core parts of the x/Ed25519 implementations in AWS-LC in s2n-bignum, an AWS-owned library of integer arithmetic routines designed for cryptographic applications. The s2n-bignum library is also where we prove the functional correctness of the implementations using HOL Light. HOL Light is an interactive theorem prover for higher-order logic (hence HOL), and it is designed to have a particularly simple (hence light) “correct by construction” approach to proof. This simplicity offers assurance that anything “proved” has really been proved rigorously and is not the artifact of a prover bug.
We follow the same principle of simplicity when we write our implementations in assembly. Writing in assembly is more challenging, but it offers a distinct advantage when proving correctness: our proofs become independent of any compiler.
The diagram below shows the process we use to prove x/Ed25519 correct. The process requires two different sets of inputs: the first is the algorithm implementation we’re evaluating; the second is a proof script that models both the correct mathematical behavior of the algorithm and the behavior of the CPU. The proof script is a sequence of functions specific to HOL Light that represent proof strategies and the order in which they should be applied. Writing the proof script is not automated and requires developer ingenuity.
From the algorithm implementation and the proof script, HOL Light either determines that the implementation is correct or, if unable to do so, fails. HOL Light views the algorithm implementation as a sequence of machine code bytes. Using the supplied specification of CPU instructions and the developer-written strategies in the proof script, HOL Light reasons about the correctness of the execution.
This part of the correctness proof is automated, and we even implement it inside s2n-bignum’s continuous-integration (CI) workflow. The workflow covered in the CI is highlighted by the red dotted line in the diagram below. CI integration provides assurance that no changes to the algorithm implementation code can be committed to s2n-bignum’s code repository without successfully passing a formal proof of correctness.
The CPU instruction specification is one of the most critical ingredients in our correctness proofs. For the proofs to be true in practice, the specification must capture the real-world semantics of each instruction. To improve assurance on this point, we apply randomized testing against the instruction specifications on real hardware, “fuzzing out” inaccuracies.
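In spirit, that check looks like the sketch below: generate random operands, run them through the modeled semantics of an instruction, and compare against a trusted reference. On real hardware the reference is the CPU executing the instruction itself; here Python’s big integers stand in for it, and the modeled add-with-carry is our own toy example rather than s2n-bignum’s specification.

```python
import random

MASK64 = (1 << 64) - 1

def adc_model(x, y, carry_in):
    """Toy model of a 64-bit add-with-carry: returns (result, carry_out)."""
    total = x + y + carry_in
    return total & MASK64, total >> 64

# Randomized ("fuzzing") check of the model against a reference computation.
for _ in range(100_000):
    x, y, c = random.getrandbits(64), random.getrandbits(64), random.getrandbits(1)
    expected = x + y + c
    result, carry_out = adc_model(x, y, c)
    assert result == expected & MASK64 and carry_out == expected >> 64
```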
Constant time
We designed our implementations and optimizations with security as priority number one. Cryptographic code must strive to be free of side channels that could allow an unauthorized user to extract private information. For example, if the execution time of cryptographic code depends on secret values, then it might be possible to infer those values from execution times. Similarly, if CPU cache behavior depends on secret values, an unauthorized user who shares the cache could infer those values.
Our implementations of x/Ed25519 are designed with constant time in mind. They perform exactly the same sequence of basic CPU instructions regardless of the input values, and they avoid any CPU instructions that might have data-dependent timing.
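One recurring building block is a branch-free conditional swap, sketched below in Python. The pattern, not the language, is the point: Python gives no timing guarantees, whereas the assembly implementations use the same masking idea with instructions whose timing does not depend on their data.

```python
MASK64 = (1 << 64) - 1

def ct_cswap(swap_bit, a, b):
    """Swap a and b when swap_bit is 1, using the same operations either way."""
    mask = (-swap_bit) & MASK64     # all ones if swap_bit == 1, else zero
    t = mask & (a ^ b)
    return a ^ t, b ^ t

assert ct_cswap(0, 5, 9) == (5, 9)
assert ct_cswap(1, 5, 9) == (9, 5)
```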
Using x/Ed25519 optimizations in applications
AWS uses AWS-LC extensively to power cryptographic operations in a diverse set of AWS service subsystems. You can take advantage of the x/Ed25519 optimizations presented in this blog by using AWS-LC in your applications. Visit AWS-LC on GitHub to learn more about how you can integrate AWS-LC into your application.
To allow easier integration for developers, AWS has created bindings from AWS-LC to multiple programming languages. These bindings expose cryptographic functionality from AWS-LC through well-defined APIs, removing the need to reimplement cryptographic algorithms in higher-level programming languages. At present, AWS has open-sourced bindings for Java and Rust — the Amazon Corretto Cryptographic Provider (ACCP) for Java, and AWS-LC for Rust (aws-lc-rs). Furthermore, we have contributed patches allowing CPython to build against AWS-LC and use it for all cryptography in the Python standard library. Many open-source projects are already using AWS-LC to meet their cryptographic needs.
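As a rough illustration of what these algorithms look like at the application level, the Python sketch below uses the widely used third-party cryptography package; whether a given build is backed by AWS-LC is a packaging question, and the bindings named above expose the same operations to Java and Rust.

```python
# Application-level view of the two algorithms, via the third-party
# `cryptography` package (pip install cryptography). Whether a given build
# is backed by AWS-LC depends on how the underlying libcrypto was linked.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# x25519 key agreement: both peers derive the same shared secret.
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())

# Ed25519 signatures: sign with the private key, verify with the public key.
signer = Ed25519PrivateKey.generate()
message = b"hello, curve25519"
signature = signer.sign(message)
signer.public_key().verify(signature, message)  # raises InvalidSignature on mismatch
```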
We are not done yet. We continue our efforts to improve x/Ed25519 performance as well as pursuing optimizations for other cryptographic algorithms supported by s2n-bignum and AWS-LC. Follow the s2n-bignum and AWS-LC repositories for updates.
An inside look at Meta’s transition from C to Rust on mobile
Have you ever worked in legacy code? Are you curious about what it takes to modernize systems at a massive scale?
Pascal Hartig is joined on the latest Meta Tech Podcast by Elaine and Buping, two software engineers working on a bold project to rewrite the decades-old C code in one of Meta’s core messaging libraries in Rust. It’s an ambitious effort that will transform a central messaging library that is shared across Messenger, Facebook, Instagram, and Meta’s AR/VR platforms.
They discuss taking on a project of this scope, even without a background in Rust; how they’re approaching it; and what it means to optimize for ‘developer happiness.’
You can download or listen to the episode wherever you get your podcasts.
The Meta Tech Podcast is brought to you by Meta and highlights the work Meta’s engineers are doing at every level, from low-level frameworks to end-user features.
Send us feedback on Instagram, Threads, or X.
And if you’re interested in learning more about career opportunities at Meta, visit the Meta Careers page.
Amazon Research Awards recipients announced
Amazon Research Awards (ARA) provides unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines. This cycle, ARA received many excellent research proposals from across the world and today is publicly announcing 73 award recipients who represent 46 universities in 10 countries.
This announcement includes awards funded under five calls for proposals during the fall 2024 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability. Proposals were reviewed for the quality of their scientific content and their potential to impact both the research community and society. Additionally, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open-source licenses.
Recipients have access to more than 700 Amazon public datasets and can utilize AWS AI/ML services and tools through their AWS Promotional Credits. Recipients also are assigned an Amazon research contact who offers consultation and advice, along with opportunities to participate in Amazon events and training sessions.
“Automated Reasoning is an important area of research for Amazon, with potential applications across various features and applications to help improve security, reliability, and performance for our customers. Through the ARA program, we collaborate with leading academic researchers to explore challenges in this field,” said Robert Jones, senior principal scientist with the Cloud Automated Reasoning Group. “We were again impressed by the exceptional response to our Automated Reasoning call for proposals this year, receiving numerous high-quality submissions. Congratulations to the recipients! We’re excited to support their work and partner with them as they develop new science and technology in this important area.”
“At Amazon, we believe that solving the world’s toughest sustainability challenges benefits from both breakthrough scientific research and open and bold collaboration. Through programs like the Amazon Research Awards program, we aim to support academic research that could contribute to our understanding of these complex issues,” said Kommy Weldemariam, Director of Science and Innovation Sustainability. “The selected proposals represent innovative projects that we hope will help advance knowledge in this field, potentially benefiting customers, communities, and the environment.”
ARA funds proposals throughout the year in a variety of research areas. Applicants are encouraged to visit the ARA call for proposals page for more information or send an email to be notified of future open calls.
The tables below list, in alphabetical order by last name, fall 2024 cycle call-for-proposal recipients, sorted by research area.
AI for Information Security
| Recipient | University | Research title |
| --- | --- | --- |
Christopher Amato | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms |
Bernd Bischl | Ludwig Maximilian University of Munich | Improving Generative and Foundation Models Reliability via Uncertainty-awareness |
Shiqing Ma | University of Massachusetts Amherst | LLM and Domain Adaptation for Attack Detection |
Alina Oprea | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms |
Roberto Perdisci | University of Georgia | ContextADBench: A Comprehensive Benchmark Suite for Contextual Anomaly Detection |
Automated Reasoning
| Recipient | University | Research title |
| --- | --- | --- |
Nada Amin | Harvard University | LLM-Augmented Semi-Automated Proofs for Interactive Verification |
Suguman Bansal | Georgia Institute of Technology | Certified Inductive Generalization in Reinforcement Learning |
Ioana Boureanu | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems |
Omar Haider Chowdhury | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege |
Stefan Ciobaca | Alexandru Ioan Cuza University | An Interactive Proof Mode for Dafny |
João Ferreira | INESC-ID | Polyglot Automated Program Repair for Infrastructure as Code |
Sicun Gao | University of California, San Diego | Monte Carlo Trees with Conflict Models for Proof Search |
Mirco Giacobbe | University of Birmingham | Neural Software Verification |
Tobias Grosser | University of Cambridge | Synthesis-based Symbolic BitVector Simplification for Lean |
Ronghui Gu | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software |
Alexey Ignatiev | Monash University | Huub: Next-Gen Lazy Clause Generation |
Kenneth McMillan | University of Texas at Austin | Synthesis of Auxiliary Variables and Invariants for Distributed Protocol Verification |
Alexandra Mendes | University of Porto | Overcoming Barriers to the Adoption of Verification-Aware Languages |
Jason Nieh | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software |
Rohan Padhye | Carnegie Mellon University | Automated Synthesis and Evaluation of Property-Based Tests |
Nadia Polikarpova | University of California, San Diego | Discovering and Proving Critical System Properties with LLMs |
Fortunat Rajaona | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems |
Subhajit Roy | Indian Institute of Technology Kanpur | Theorem Proving Modulo LLM |
Gagandeep Singh | University of Illinois at Urbana–Champaign | Trustworthy LLM Systems using Formal Contracts |
Scott Stoller | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege |
Peter Stuckey | Monash University | Huub: Next-Gen Lazy Clause Generation |
Yulei Sui | University of New South Wales | Path-Sensitive Typestate Analysis through Sparse Abstract Execution |
Nikos Vasilakis | Brown University | Semantics-Driven Static Analysis for the Unix/Linux Shell |
Ping Wang | Stevens Institute of Technology | Leveraging Large Language Models for Reasoning Augmented Searching on Domain-specific NoSQL Database |
John Wawrzynek | University of California, Berkeley | GPU-Accelerated High-Throughput SAT Sampling |
AWS AI
| Recipient | University | Research title |
| --- | --- | --- |
Panagiotis Adamopoulos | Emory University | Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations |
Vikram Adve | University of Illinois at Urbana–Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models |
Frances Arnold | California Institute of Technology | Closed-loop Generative Machine Learning for De Novo Enzyme Discovery and Optimization |
Yonatan Bisk | Carnegie Mellon University | Useful, Safe, and Robust Multiturn Interactions with LLMs |
Shiyu Chang | University of California, Santa Barbara | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing |
Yuxin Chen | University of Pennsylvania | Provable Acceleration of Diffusion Models for Modern Generative AI |
Tianlong Chen | University of North Carolina at Chapel Hill | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing |
Mingyu Ding | University of North Carolina at Chapel Hill | Aligning Long Videos and Language as Long-Horizon World Models |
Nikhil Garg | Cornell University | Market Design for Responsible Multi-agent LLMs |
Jessica Hullman | Northwestern University | Human-Aligned Uncertainty Quantification in High Dimensions |
Christopher Jermaine | Rice University | Fast, Trusted AI Using the EINSUMMABLE Compiler |
Yunzhu Li | Columbia University | Physics-Informed Foundation Models Through Embodied Interactions |
Pattie Maes | Massachusetts Institute of Technology | Understanding How LLM Agents Deviate from Human Choices |
Sasa Misailovic | University of Illinois at Urbana–Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models |
Kristina Monakhova | Cornell University | Trustworthy extreme imaging for science using interpretable uncertainty quantification |
Todd Mowry | Carnegie Mellon University | Efficient LLM Serving on Trainium via Kernel Generation |
Min-hwan Oh | Seoul National University | Mutually Beneficial Interplay Between Selection Fairness and Context Diversity in Contextual Bandits |
Patrick Rebeschini | University of Oxford | Optimal Regularization for LLM Alignment |
Jose Renau | University of California, Santa Cruz | Verification Constrained Hardware Optimization using Intelligent Design Agentic Programming |
Vilma Todri | Emory University | Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations |
Aravindan Vijayaraghavan | Northwestern University | Human-Aligned Uncertainty Quantification in High Dimensions |
Wei Yang | University of Texas at Dallas | Optimizing RISC-V Compilers with RISC-LLM and Syntax Parsing |
Huaxiu Yao | University of North Carolina at Chapel Hill | Aligning Long Videos and Language as Long-Horizon World Models |
Amy Zhang | University of Washington | Tools for Governing AI Agent Autonomy |
Ruqi Zhang | Purdue University | Efficient Test-time Alignment for Large Language Models and Large Multimodal Models |
Zheng Zhang | Rutgers University-New Brunswick | AlphaQC: An AI-powered Quantum Circuit Optimizer and Denoiser |
AWS Cryptography
| Recipient | University | Research title |
| --- | --- | --- |
Alexandra Boldyreva | Georgia Institute of Technology | Quantifying Information Leakage in Searchable Encryption Protocols |
Maria Eichlseder | Graz University of Technology, Austria | SALAD – Systematic Analysis of Lightweight Ascon-based Designs |
Venkatesan Guruswami | University of California, Berkeley | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing |
Joseph Jaeger | Georgia Institute of Technology | Analyzing Chat Encryption for Group Messaging |
Aayush Jain | Carnegie Mellon University | Large Scale Multiparty Silent Preprocessing for MPC from LPN |
Huijia Lin | University of Washington | Large Scale Multiparty Silent Preprocessing for MPC from LPN |
Hamed Nemati | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HolBA library |
Karl Palmskog | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HolBA library |
Chris Peikert | University of Michigan, Ann Arbor | Practical Third-Generation FHE and Bootstrapping |
Dimitrios Skarlatos | Carnegie Mellon University | Scale-Out FHE LLMs on GPUs |
Vinod Vaikuntanathan | Massachusetts Institute of Technology | Can Quantum Computers (Really) Factor? |
Daniel Wichs | Northeastern University | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing |
David Wu | University of Texas at Austin | Fast Private Information Retrieval and More using Homomorphic Encryption |
Sustainability
| Recipient | University | Research title |
| --- | --- | --- |
Meeyoung Cha | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring |
Jingrui He | University of Illinois at Urbana–Champaign | Foundation Model Enabled Earth’s Ecosystem Monitoring |
Pedro Lopes | University of Chicago | AI-powered Tools that Enable Engineers to Make & Re-make Sustainable Hardware |
Cheng Yaw Low | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring |
Independent evaluations demonstrate Nova Premier’s safety
AI safety is a priority at Amazon. Our investment in safe, transparent, and responsible AI (RAI) includes collaboration with the global community and policymakers. We are members of and collaborate with organizations such as the Frontier Model Forum, the Partnership on AI, and other forums organized by government agencies such as the National Institute of Standards and Technology (NIST). Consistent with Amazon’s endorsement of the Korea Frontier AI Safety Commitments, we published our Frontier Model Safety Framework earlier this year.
During the development of the Nova Premier model, we conducted a comprehensive evaluation to assess its performance and safety. This included testing on both internal and public benchmarks, as well as internal automated and third-party red-teaming exercises. Once the final model was ready, we prioritized obtaining unbiased, third-party evaluations of the model’s robustness against RAI controls. In this post, we outline the key findings from these evaluations, demonstrating the strength of our testing approach and Nova Premier’s standing as a safe model. Specifically, we cover our evaluations with two third-party evaluators: PRISM AI and ActiveFence.
Evaluation of Nova Premier against PRISM AI
PRISM Eval’s Behavior Elicitation Tool (BET) dynamically and systematically stress-tests AI models’ safety guardrails. The methodology focuses on measuring how many adversarial attempts (steps) it takes to get a model to generate harmful content across several key risk dimensions. The central metric is “steps to elicit” — the number of increasingly sophisticated prompting attempts required before a model generates an inappropriate response. A higher number of steps indicates stronger safety measures, as the model is more resistant to manipulation. The PRISM risk dimensions (inspired by the MLCommons AI Safety Benchmarks) include CBRNE weapons, violent crimes, non-violent crimes, defamation, and hate, amongst several others.
Using the BET Eval tool and its V1.0 metric, which is tailored toward non-reasoning models, we compared the recently released Nova models (Pro and Premier) to the latest models in the same class: Claude (3.5 v2 and 3.7 non-reasoning) and Llama 4 Maverick, all available through Amazon Bedrock. PRISM BET conducts black-box evaluations (where model developers don’t have access to the test prompts) of models integrated with their API. The evaluation conducted with BET Eval MAX, PRISM’s most comprehensive/aggressive testing suite, revealed significant variations in safety against malicious instructions. Nova models demonstrated superior overall safety performance, with an average of 43 steps for Premier and 52 steps for Pro, compared to 37.7 for Claude 3.5 v2 and fewer than 12 steps for other models in the comparison set (namely, 9.9 for Claude 3.7, 11.5 for Claude 3.7 thinking, and 6.5 for Maverick). This higher step count suggests that, on average, Nova’s safety guardrails are more sophisticated and harder to circumvent through adversarial prompting. The figure below presents the number of steps per harm category evaluated through BET Eval MAX.
The PRISM evaluation provides valuable insights into the relative safety of different Amazon Bedrock models. Nova’s strong performance, particularly in hate speech and defamation resistance, represents meaningful progress in AI safety. However, the results also highlight the ongoing challenge of building truly robust safety measures into AI systems. As the field continues to evolve, frameworks like BET will play an increasingly important role in benchmarking and improving AI safety. As a part of this collaboration Nicolas Miailhe, CEO of PRISM Eval, said, “It’s incredibly rewarding for us to see Nova outperforming strong baselines using the BET Eval MAX; our aim is to build a long-term partnership toward safer-by-design models and to make BET available to various model providers.” Organizations deploying AI systems should carefully consider these safety metrics when selecting models for their applications.
Manual red teaming with ActiveFence
The AI safety & security company ActiveFence benchmarked Nova Premier on Bedrock on prompts distributed across Amazon’s eight core RAI categories. ActiveFence also evaluated Claude 3.7 (non-reasoning mode) and GPT 4.1 API on the same set. The flag rate on Nova Premier was lower than that on the other two models, indicating that Nova Premier is the safest of the three.
| Model | 3P Flag Rate [↓ is better] |
| --- | --- |
Nova Premier | 12.0% |
Sonnet 3.7 (non-reasoning) | 20.6% |
GPT4.1 API | 22.4% |
“Our role is to think like an adversary but act in service of safety,” said Guy Paltieli from ActiveFence. “By conducting a blind stress test of Nova Premier under realistic threat scenarios, we helped evaluate its security posture in support of Amazon’s broader responsible-AI goals, ensuring the model could be deployed with greater confidence.”
These evaluations conducted with PRISM and ActiveFence give us confidence in the strength of our guardrails and our ability to protect our customers’ safety when they use our models. While these evaluations demonstrate strong safety performance, we recognize that AI safety is an ongoing challenge requiring continuous improvement. These assessments represent a point-in-time snapshot, and we remain committed to regular testing and enhancement of our safety measures. No AI system can guarantee perfect safety in all scenarios, which is why we maintain monitoring and response systems after deployment.
Acknowledgments: Vincent Ponzo, Elyssa Vincent