
Events & Conferences

How Project P.I. helps Amazon remove imperfect products



Although there are hundreds of millions of products stored in Amazon fulfillment centers, it’s very rare for customers to report shipped products as damaged. However, Amazon’s culture of customer obsession means that teams are actively working to find and remove even that relatively small number of imperfect products before they’re delivered to customers.


One of those teams includes scientists who are using generative AI and computer vision, powered by AWS services such as Amazon Bedrock and Amazon SageMaker, to help spot, isolate, and remove imperfect items.

Inside Amazon fulfillment centers across North America, products ranging from dog food and phone cases to T-shirts and books pass through imaging tunnels for a wide variety of uses, including sorting products based on their intended destination. Those use cases have been extended to include the use of artificial intelligence to inspect individual items for defects.

For example, optical character recognition (OCR) — the process that converts an image of text into a machine-readable text format — checks expiration dates on product packaging to ensure expired items are not sent to customers. Computer vision (CV) models — trained with reference images from the product catalog and actual images of products sent to customers — pore over color and monochrome images for signs of product damage such as bent book covers.

Additionally, a recent breakthrough solution leverages the ability of generative AI to process multimodal information by synthesizing evidence from images captured during the Amazon fulfillment process and combining it with written customer feedback to trigger even faster corrective actions.

This effort, referred to collectively as Project P.I., which stands for “private investigator”, encompasses the team’s vision of using a detective-like toolset to uncover both defects and, wherever possible, their cause — to address the issue at its root before a product reaches the customer.

“We want to equip ourselves with the most powerful, scalable tools and levers to help us protect our customers’ trust,” said Pingping Shan, director of perfect order experience at Amazon.

Defect detection

Project P.I. is an outgrowth of Amazon’s product quality program, and the tools and systems developed by the team’s scientists include machine learning models that assist selling partners with listing products with accurate information.

“The product quality team is constantly looking for ways to both reduce the burden on the sellers and to proactively verify the condition of inventory in fulfillment centers,” Shan said.

An early solution was an OCR model that checks the labeling information when inventory arrives and compares that to the information in Amazon’s database. If a mismatch occurs — such as a pallet of dog food with an earlier sell-by date than the date in the database — the team can isolate and inspect the pallet and prevent any expired products from reaching the customer.
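At its core, the check reduces to a comparison between the date OCR reads off the label and the date in the catalog record. A minimal sketch, with illustrative function and parameter names (not Amazon's actual system):

```python
from datetime import date

def flag_date_mismatch(ocr_sell_by: date, catalog_sell_by: date) -> bool:
    """Flag a pallet for inspection when the sell-by date that OCR
    reads off the label differs from the date in the catalog record."""
    return ocr_sell_by != catalog_sell_by

# A pallet labeled with an earlier sell-by date than the database expects:
needs_inspection = flag_date_mismatch(date(2025, 3, 1), date(2025, 9, 1))  # True
```

In practice the hard part is the OCR itself; once both dates are machine-readable, the mismatch test is trivial.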

When an item-level defect is detected, Amazon takes several steps to resolve the issue, including investigating whether the item is one in a defective batch and, if so, isolating the batch from the rest of the items, explained Angela Ke, a senior product manager.

“We want to make sure that customers don’t have to experience issues with product quality. That’s really the vision of Project P.I.,” she said. “We want to get it right for customers the first time, so we want to inspect the products before they leave our fulfillment center, and we incorporate AI to streamline the workflow.”

Customer feedback aids model training

Despite the team’s best efforts, sometimes product quality issues only become known after an item has been delivered to customers, noted Mark Ma, a principal product manager. Those issues typically surface when customers file a return noting the problem. In those instances, the team tracks down the batch the product came from, verifies the issue, removes those items from fulfillment center shelves, issues refunds, and communicates the issue to the seller.

“We know that correcting the defects after they happen is not the best way to protect and improve the customer experience. That’s why we started exploring what kind of data we can gather further upstream,” he said. Those discussions eventually led to leveraging the tunnel images to better identify products with defects and take surgical and proactive action to address them — before they’re packaged and shipped.


One of the early challenges with that approach entailed training CV models to correctly identify defects, noted Vincent Gao, a senior science manager on the product quality team.

“It’s like finding a needle in a haystack,” he said. “We needed a model that could accurately identify those among all the other normal products. Otherwise, we could be finding a lot of false positives, making the fulfillment process inefficient.”

Gao’s team turned to an ensemble approach that combines self-supervised models with supervised transformer models — a neural-network architecture that uses attention mechanisms to improve performance on machine learning tasks — to spot the difference between normal and defective items. By learning what the “correct” product looks like from fulfillment center images associated with normal orders, the model can compare an item on its way to be packaged against its “normal” image and provide a measurement of how much it differs.
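That comparison can be sketched as a distance between image embeddings. Everything here (the cosine-distance score, the threshold value) is an assumed stand-in for the team's actual models, not a description of them:

```python
import numpy as np

def anomaly_score(item_emb: np.ndarray, normal_emb: np.ndarray) -> float:
    """Cosine distance between the embedding of an item's tunnel image
    and a reference embedding learned from images of normal orders.
    Higher scores mean the item looks less like the 'correct' product."""
    a = item_emb / np.linalg.norm(item_emb)
    b = normal_emb / np.linalg.norm(normal_emb)
    return float(1.0 - a @ b)

def is_defective(item_emb, normal_emb, threshold=0.35):
    # The threshold trades recall against the false positives that
    # would slow the fulfillment process.
    return anomaly_score(item_emb, normal_emb) > threshold
```

An item whose embedding sits close to the "normal" reference passes; one that drifts past the threshold gets pulled for inspection.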

This approach allowed the team to more reliably spot obvious product defects, such as a book with a torn cover or an empty canister of tennis balls, yet it still couldn’t account for some of the fine-grained details like a mislabeled T-shirt size or a bent box.

To achieve that, the team turned to customer feedback to help train a variety of ML models that can spot the difference between normal and defective items. This more detailed, labeled data was used to refine the model to detect the types of defects customers notice.

“Using that, we are able to be more targeted on the areas that we want to identify so that we can enable the models to learn more on those finer details,” Gao said.

Leveraging generative AI

Today, the science team is leveraging breakthroughs in generative AI to make product defect detection more scalable and robust. For example, the team launched a multimodal large language model (MLLM) that’s been trained to identify damage such as broken seals, torn boxes, and bent book covers, and report in plain language the damage it detects.


“We use the MLLM to ingest and understand the images from fulfillment centers to identify damage patterns with zero-shot learning capability — meaning the model can recognize something it has not seen in training. That is a significant plus when it comes to identifying damage patterns given their vast variation,” Ma explained. “Then we use the model to summarize common damage patterns, which enables us to work more upstream with our selling partners and manufacturers to proactively address these issues.”

With traditional CV technologies, a model would be trained for each damage scenario – broken seal, torn box, etc. – Gao said, resulting in an unscalable ensemble of dozens to hundreds of models. The MLLM, on the other hand, is a single, scalable, unified solution.
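The scalability contrast can be sketched with stubs; all of the names, the prompt, and the stub models below are invented for illustration:

```python
def classic_pipeline(image, classifiers):
    """Classic CV sketch: one binary classifier per damage scenario,
    each trained separately (the part that doesn't scale)."""
    return sorted(name for name, model in classifiers.items() if model(image))

def mllm_pipeline(image, mllm):
    """MLLM sketch: a single model answers one open-ended prompt that
    covers every damage scenario, including ones unseen in training."""
    prompt = "Describe any damage visible on this item in plain language."
    return mllm(image, prompt)

# Stubs standing in for real models:
image = "item_0042.jpg"
classifiers = {"broken seal": lambda im: True, "torn box": lambda im: False}
found = classic_pipeline(image, classifiers)  # ["broken seal"]
```

Adding a new damage type to the classic pipeline means training and deploying another classifier; with the MLLM it is, at most, a change to the prompt.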

“That’s the new power we now have on top of the classic computer vision,” Shan said.

The Project P.I. team has also recently put into production a generative AI system that uses an MLLM to investigate the root cause of negative customer experiences. The system first reviews customer feedback about the issue and then analyzes product images collected by the tunnels and other data sources to confirm the root cause.


For example, if a customer contacts Amazon because they ordered twin-size sheets but received king-size, the generative AI system cross-references that feedback with fulfillment center images. The system will ask questions such as, “Is the product label visible in the image?” “Does the label read king or twin?”

The system’s vision-language model in turn looks at the images, extracts the text from the label, and answers the questions. The LLM converts the answers into a plainspoken summary of the investigation.

“The LLM is working side-by-side with the visual language model to analyze data from different sources and modalities to help us make a decision,” said Gao. “We can actually have the LLM trigger the vision-language model to finish all the different verification tasks.”
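A hedged sketch of that orchestration pattern, with a stub standing in for the vision-language model (the questions come from the sheet-size example above; the API itself is invented for illustration):

```python
def investigate(questions, vlm_answer):
    """Hypothetical orchestration loop: an LLM has already turned
    customer feedback into targeted verification questions; a
    vision-language model (vlm_answer) answers each one against the
    fulfillment center images. The collected evidence then feeds the
    LLM's plain-language summary of the investigation."""
    return {q: vlm_answer(q) for q in questions}

# Stub VLM standing in for the real multimodal model:
stub_answers = {
    "Is the product label visible in the image?": "yes",
    "Does the label read king or twin?": "king",
}
evidence = investigate(list(stub_answers), lambda q: stub_answers[q])
```

The design choice worth noting is the division of labor: the language model decides what to verify, and the vision model does the looking.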

Proof of concept in the fulfillment center

Since May 2022, the product quality team has been rolling out their item-level product defect detection solutions using imaging tunnels at several fulfillment centers in North America.

The results have been promising. The system has proven itself adept at sorting through the millions of items that pass through the tunnels each month and accurately identifying both expired items and issues such as wrong color or size.


In the future, the team aims to implement near real-time product defect detection with local image processing. In this scenario, defective items could be pulled off the conveyor belt and a replacement item automatically ordered, thus eliminating disruptions to the fulfillment process.

“Ultimately, we want to be behind the scenes. We don’t need our customers to know this is going on,” said Keiko Akashi, a senior manager of product management at Amazon. “The customer should be getting a perfect order and not even know that the expired or damaged item existed.”

Sidelining defective items will also result in fewer returns, which has an added sustainability benefit, noted Gao.

“We want to intercept the wrong items or defective items,” he said. “That translates to less back and forth shipping overhead, while also delivering a better customer experience.”

New avenues for investigation

Seamless integration of these solutions across the Amazon fulfillment center network will require refinements to the AI models, such as the ability to distinguish a perceived defect from an actual one. For example, a “manufactured on” date might be mistaken for an expiration date, or sneakers that arrive without a shoebox might be flagged as the wrong item when the missing box is actually a step to reduce packaging, noted Ke.


What’s more, there are challenges adapting CV models to the unique nuances of each fulfillment center and region, such as the size and color of the totes used to convey items around fulfillment centers, and the ability to extract data across a multitude of languages.

“There’s a lot of information that’s written in words,” Ke explained. “So how do we make sure that the model is picking up the right language and translating it correctly? That’s another challenge our science team is trying to solve.”

As the team has gone down this road, they’ve amassed data that shows the defects sometimes are the result of what happens outside of Amazon’s fulfillment centers.

“It could have been a carrier issue,” noted Akashi. “When customers say, ‘Hey, it came damaged,’ we can look into our outbound images and see that nothing has gone wrong. Then we can go figure out what else is going on.”

The team also plans to make data on defects more easily accessible to selling partners, Akashi added. For example, if Amazon discovered a seller accidentally put stickers with the wrong size on a product, Amazon would communicate the issue to help prevent the error from happening again.

“There’s an opportunity to get this information in front of our selling partners so they have visibility to their own inventory, and they can also have more succinct root causes to why these returns are happening,” she explained. “We’re excited that the data that we’re gathering and the AI models we are creating will benefit our customers and selling partners.”






An inside look at Meta’s transition from C to Rust on mobile



Have you ever worked in legacy code? Are you curious what it takes to modernize systems at a massive scale?

Pascal Hartig is joined on the latest Meta Tech Podcast by Elaine and Buping, two software engineers working on a bold project to rewrite the decades-old C code in one of Meta’s core messaging libraries in Rust. It’s an ambitious effort that will transform a central messaging library that is shared across Messenger, Facebook, Instagram, and Meta’s AR/VR platforms.

They discuss taking on a project of this scope – even without a background in Rust – how they’re approaching it, and what it means to optimize for ‘developer happiness.’

You can find the episode wherever you get your podcasts.

The Meta Tech Podcast is brought to you by Meta and highlights the work Meta’s engineers are doing at every level – from low-level frameworks to end-user features.

Send us feedback on Instagram, Threads, or X.

And if you’re interested in learning more about career opportunities at Meta, visit the Meta Careers page.






Amazon Research Awards recipients announced



Amazon Research Awards (ARA) provides unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines. This cycle, ARA received many excellent research proposals from across the world and today is publicly announcing 73 award recipients who represent 46 universities in 10 countries.

This announcement includes awards funded under five calls for proposals during the fall 2024 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability. Proposals were reviewed for the quality of their scientific content and their potential to impact both the research community and society. Additionally, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open-source licenses.

Recipients have access to more than 700 Amazon public datasets and can utilize AWS AI/ML services and tools through their AWS Promotional Credits. Recipients also are assigned an Amazon research contact who offers consultation and advice, along with opportunities to participate in Amazon events and training sessions.


“Automated Reasoning is an important area of research for Amazon, with potential applications across various features and applications to help improve security, reliability, and performance for our customers. Through the ARA program, we collaborate with leading academic researchers to explore challenges in this field,” said Robert Jones, senior principal scientist with the Cloud Automated Reasoning Group. “We were again impressed by the exceptional response to our Automated Reasoning call for proposals this year, receiving numerous high-quality submissions. Congratulations to the recipients! We’re excited to support their work and partner with them as they develop new science and technology in this important area.”


“At Amazon, we believe that solving the world’s toughest sustainability challenges benefits from both breakthrough scientific research and open and bold collaboration. Through programs like the Amazon Research Awards program, we aim to support academic research that could contribute to our understanding of these complex issues,” said Kommy Weldemariam, Director of Science and Innovation Sustainability. “The selected proposals represent innovative projects that we hope will help advance knowledge in this field, potentially benefiting customers, communities, and the environment.”

ARA funds proposals throughout the year in a variety of research areas. Applicants are encouraged to visit the ARA call for proposals page for more information or send an email to be notified of future open calls.

The tables below list, in alphabetical order by last name, fall 2024 cycle call-for-proposal recipients, sorted by research area.

AI for Information Security

Recipient University Research title
Christopher Amato Northeastern University Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms
Bernd Bischl Ludwig Maximilian University of Munich Improving Generative and Foundation Models Reliability via Uncertainty-awareness
Shiqing Ma University Of Massachusetts Amherst LLM and Domain Adaptation for Attack Detection
Alina Oprea Northeastern University Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms
Roberto Perdisci University of Georgia ContextADBench: A Comprehensive Benchmark Suite for Contextual Anomaly Detection

Automated Reasoning

Recipient University Research title
Nada Amin Harvard University LLM-Augmented Semi-Automated Proofs for Interactive Verification
Suguman Bansal Georgia Institute of Technology Certified Inductive Generalization in Reinforcement Learning
Ioana Boureanu University of Surrey Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems
Omar Haider Chowdhury Stony Brook University Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege
Stefan Ciobaca Alexandru Ioan Cuza University An Interactive Proof Mode for Dafny
João Ferreira INESC-ID Polyglot Automated Program Repair for Infrastructure as Code
Sicun Gao University Of California, San Diego Monte Carlo Trees with Conflict Models for Proof Search
Mirco Giacobbe University of Birmingham Neural Software Verification
Tobias Grosser University of Cambridge Synthesis-based Symbolic BitVector Simplification for Lean
Ronghui Gu Columbia University Scaling Formal Verification of Security Properties for Unmodified System Software
Alexey Ignatiev Monash University Huub: Next-Gen Lazy Clause Generation
Kenneth McMillan University of Texas At Austin Synthesis of Auxiliary Variables and Invariants for Distributed Protocol Verification
Alexandra Mendes University of Porto Overcoming Barriers to the Adoption of Verification-Aware Languages
Jason Nieh Columbia University Scaling Formal Verification of Security Properties for Unmodified System Software
Rohan Padhye Carnegie Mellon University Automated Synthesis and Evaluation of Property-Based Tests
Nadia Polikarpova University Of California, San Diego Discovering and Proving Critical System Properties with LLMs
Fortunat Rajaona University of Surrey Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems
Subhajit Roy Indian Institute of Technology Kanpur Theorem Proving Modulo LLM
Gagandeep Singh University of Illinois At Urbana–Champaign Trustworthy LLM Systems using Formal Contracts
Scott Stoller Stony Brook University Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege
Peter Stuckey Monash University Huub: Next-Gen Lazy Clause Generation
Yulei Sui University of New South Wales Path-Sensitive Typestate Analysis through Sparse Abstract Execution
Nikos Vasilakis Brown University Semantics-Driven Static Analysis for the Unix/Linux Shell
Ping Wang Stevens Institute of Technology Leveraging Large Language Models for Reasoning Augmented Searching on Domain-specific NoSQL Database
John Wawrzynek University of California, Berkeley GPU-Accelerated High-Throughput SAT Sampling

AWS AI

Recipient University Research title
Panagiotis Adamopoulos Emory University Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations
Vikram Adve University of Illinois at Urbana–Champaign Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models
Frances Arnold California Institute of Technology Closed-loop Generative Machine Learning for De Novo Enzyme Discovery and Optimization
Yonatan Bisk Carnegie Mellon University Useful, Safe, and Robust Multiturn Interactions with LLMs
Shiyu Chang University of California, Santa Barbara Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing
Yuxin Chen University of Pennsylvania Provable Acceleration of Diffusion Models for Modern Generative AI
Tianlong Chen University of North Carolina at Chapel Hill Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing
Mingyu Ding University of North Carolina at Chapel Hill Aligning Long Videos and Language as Long-Horizon World Models
Nikhil Garg Cornell University Market Design for Responsible Multi-agent LLMs
Jessica Hullman Northwestern University Human-Aligned Uncertainty Quantification in High Dimensions
Christopher Jermaine Rice University Fast, Trusted AI Using the EINSUMMABLE Compiler
Yunzhu Li Columbia University Physics-Informed Foundation Models Through Embodied Interactions
Pattie Maes Massachusetts Institute of Technology Understanding How LLM Agents Deviate from Human Choices
Sasa Misailovic University of Illinois at Urbana–Champaign Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models
Kristina Monakhova Cornell University Trustworthy extreme imaging for science using interpretable uncertainty quantification
Todd Mowry Carnegie Mellon University Efficient LLM Serving on Trainium via Kernel Generation
Min-hwan Oh Seoul National University Mutually Beneficial Interplay Between Selection Fairness and Context Diversity in Contextual Bandits
Patrick Rebeschini University of Oxford Optimal Regularization for LLM Alignment
Jose Renau University of California, Santa Cruz Verification Constrained Hardware Optimization using Intelligent Design Agentic Programming
Vilma Todri Emory University Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations
Aravindan Vijayaraghavan Northwestern University Human-Aligned Uncertainty Quantification in High Dimensions
Wei Yang University of Texas at Dallas Optimizing RISC-V Compilers with RISC-LLM and Syntax Parsing
Huaxiu Yao University of North Carolina at Chapel Hill Aligning Long Videos and Language as Long-Horizon World Models
Amy Zhang University of Washington Tools for Governing AI Agent Autonomy
Ruqi Zhang Purdue University Efficient Test-time Alignment for Large Language Models and Large Multimodal Models
Zheng Zhang Rutgers University-New Brunswick AlphaQC: An AI-powered Quantum Circuit Optimizer and Denoiser

AWS Cryptography

Recipient University Research title
Alexandra Boldyreva Georgia Institute of Technology Quantifying Information Leakage in Searchable Encryption Protocols
Maria Eichlseder Graz University of Technology, Austria SALAD – Systematic Analysis of Lightweight Ascon-based Designs
Venkatesan Guruswami University of California, Berkeley Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing
Joseph Jaeger Georgia Institute of Technology Analyzing Chat Encryption for Group Messaging
Aayush Jain Carnegie Mellon Large Scale Multiparty Silent Preprocessing for MPC from LPN
Huijia Lin University of Washington Large Scale Multiparty Silent Preprocessing for MPC from LPN
Hamed Nemati KTH Royal Institute of Technology Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library
Karl Palmskog KTH Royal Institute of Technology Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library
Chris Peikert University of Michigan, Ann Arbor Practical Third-Generation FHE and Bootstrapping
Dimitrios Skarlatos Carnegie Mellon University Scale-Out FHE LLMs on GPUs
Vinod Vaikuntanathan Massachusetts Institute of Technology Can Quantum Computers (Really) Factor?
Daniel Wichs Northeastern University Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing
David Wu University Of Texas At Austin Fast Private Information Retrieval and More using Homomorphic Encryption

Sustainability

Recipient University Research title
Meeyoung Cha Max Planck Institute Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring
Jingrui He University of Illinois at Urbana–Champaign Foundation Model Enabled Earth’s Ecosystem Monitoring
Pedro Lopes University of Chicago AI-powered Tools that Enable Engineers to Make & Re-make Sustainable Hardware
Cheng Yaw Low Max Planck Institute Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring






Independent evaluations demonstrate Nova Premier’s safety



AI safety is a priority at Amazon. Our investment in safe, transparent, and responsible AI (RAI) includes collaboration with the global community and policymakers. We are members of and collaborate with organizations such as the Frontier Model Forum, the Partnership on AI, and other forums organized by government agencies such as the National Institute of Standards and Technology (NIST). Consistent with Amazon’s endorsement of the Korea Frontier AI Safety Commitments, we published our Frontier Model Safety Framework earlier this year.

Amazon Nova Premier’s guardrails help prevent generation of unsafe content.

During the development of the Nova Premier model, we conducted a comprehensive evaluation to assess its performance and safety. This included testing on both internal and public benchmarks, as well as internal automated and third-party red-teaming exercises. Once the final model was ready, we prioritized obtaining unbiased, third-party evaluations of the model’s robustness against RAI controls. In this post, we outline the key findings from these evaluations, demonstrating the strength of our testing approach and Nova Premier’s standing as a safe model. Specifically, we cover our evaluations with two third-party evaluators: PRISM AI and ActiveFence.

Evaluation of Nova Premier against PRISM AI

PRISM Eval’s Behavior Elicitation Tool (BET) dynamically and systematically stress-tests AI models’ safety guardrails. The methodology focuses on measuring how many adversarial attempts (steps) it takes to get a model to generate harmful content across several key risk dimensions. The central metric is “steps to elicit” — the number of increasingly sophisticated prompting attempts required before a model generates an inappropriate response. A higher number of steps indicates stronger safety measures, as the model is more resistant to manipulation. The PRISM risk dimensions (inspired by the MLCommons AI Safety Benchmarks) include CBRNE weapons, violent crimes, non-violent crimes, defamation, and hate, amongst several others.
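As a toy illustration of the metric (the category names echo the list above, but the numbers are made up, not PRISM's actual per-category results):

```python
def average_steps_to_elicit(steps_by_category):
    """'Steps to elicit' counts the adversarial prompting attempts
    needed before a model produces harmful content in a risk category;
    more steps means the model is harder to manipulate. The headline
    figure is the mean across categories."""
    return sum(steps_by_category.values()) / len(steps_by_category)

# Hypothetical per-category counts averaging to a headline score of 43.0:
avg = average_steps_to_elicit({"defamation": 50, "hate": 45, "violent crimes": 34})
```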


Using the BET Eval tool and its V1.0 metric, which is tailored toward non-reasoning models, we compared the recently released Nova models (Pro and Premier) to the latest models in the same class: Claude (3.5 v2 and 3.7 non-reasoning) and Llama4 Maverick, all available through Amazon Bedrock. PRISM BET conducts black-box evaluations (where model developers don’t have access to the test prompts) of models integrated with their API. The evaluation conducted with BET Eval MAX, PRISM’s most comprehensive/aggressive testing suite, revealed significant variations in safety against malicious instructions. Nova models demonstrated superior overall safety performance, with an average of 43 steps for Premier and 52 steps for Pro, compared to 37.7 for Claude 3.5 v2 and fewer than 12 steps for other models in the comparison set (namely, 9.9 for Claude 3.7, 11.5 for Claude 3.7 thinking, and 6.5 for Maverick). This higher step count suggests that on average, Nova’s safety guardrails are more sophisticated and harder to circumvent through adversarial prompting. The figure below presents the number of steps per harm category evaluated through BET Eval MAX.

Results of tests using PRISM’s BET Eval MAX testing suite.

The PRISM evaluation provides valuable insights into the relative safety of different Amazon Bedrock models. Nova’s strong performance, particularly in hate speech and defamation resistance, represents meaningful progress in AI safety. However, the results also highlight the ongoing challenge of building truly robust safety measures into AI systems. As the field continues to evolve, frameworks like BET will play an increasingly important role in benchmarking and improving AI safety. As a part of this collaboration, Nicolas Miailhe, CEO of PRISM Eval, said, “It’s incredibly rewarding for us to see Nova outperforming strong baselines using the BET Eval MAX; our aim is to build a long-term partnership toward safer-by-design models and to make BET available to various model providers.” Organizations deploying AI systems should carefully consider these safety metrics when selecting models for their applications.

Manual red teaming with ActiveFence

The AI safety & security company ActiveFence benchmarked Nova Premier on Amazon Bedrock using prompts distributed across Amazon’s eight core RAI categories. ActiveFence also evaluated Claude 3.7 (non-reasoning mode) and GPT 4.1 API on the same set. The flag rate on Nova Premier was lower than that on the other two models, indicating that Nova Premier is the safest of the three.

Model 3P Flag Rate [↓ is better]
Nova Premier 12.0%
Sonnet 3.7 (non-reasoning) 20.6%
GPT4.1 API 22.4%


“Our role is to think like an adversary but act in service of safety,” said Guy Paltieli from ActiveFence. “By conducting a blind stress test of Nova Premier under realistic threat scenarios, we helped evaluate its security posture in support of Amazon’s broader responsible-AI goals, ensuring the model could be deployed with greater confidence.”

These evaluations conducted with PRISM and ActiveFence give us confidence in the strength of our guardrails and our ability to protect our customers’ safety when they use our models. While these evaluations demonstrate strong safety performance, we recognize that AI safety is an ongoing challenge requiring continuous improvement. These assessments represent a point-in-time snapshot, and we remain committed to regular testing and enhancement of our safety measures. No AI system can guarantee perfect safety in all scenarios, which is why we maintain monitoring and response systems after deployment.

Acknowledgments: Vincent Ponzo, Elyssa Vincent




