
“Who we are shapes what we say and how we say it”


To hear Shrikanth Narayanan describe it, every single human conversation is a feat of engineering — a complex system for creating and interpreting a dizzying array of signals.

“When I’m speaking, I’m producing this audio signal, which you’re able to make sense out of by processing it in your auditory system and neural systems,” Narayanan says. “Meanwhile, you’re decoding my intent and emotions. I’ve always been fascinated by that.”

Narayanan uses signal processing and machine learning to better understand this sort of real-world information transfer as a University Professor and the Niki & C. L. Max Nikias Chair in Engineering at the University of Southern California (USC).

In 2020, his lab earned an Amazon Research Award for work on creating “inclusive human-AI conversational experiences for children.” Today, he continues to collaborate with Amazon researchers through The Center for Secure and Trusted Machine Learning at the USC Viterbi School of Engineering. He’s also gained a reputation for training future Amazon scientists, with dozens of his former students now working full time for the company.

They’re finding new approaches to machine learning privacy, security, and trustworthiness that are helping to shape a future that Narayanan hopes will be more equitable, more secure, and more empathetic.

A signal with ‘complex underpinnings’

Narayanan recalls being fascinated by the scientific side of the human experience as early as high school. At the time, he says, he was mainly interested in our physiology. But in retrospect, he says, his curiosity had the tenor of a tinkering engineer.


“I was always interested in how it all worked,” he says. “I wanted to know how the heart worked, what happened in the brain, how it worked together. I was looking at humans through this lens of systems — the information flow that happens within individuals and between individuals.”

It was in the early ’90s, while he was pursuing a PhD in electrical engineering at the University of California, Los Angeles, that he managed to combine his diverse interests.

“I was training in electrical engineering, but I really wanted the chance to look at something more directly connected to those human systems,” he says. He got the chance to intern at AT&T Bell Laboratories and realized human language held all the sorts of mysteries he’d been hoping to help solve.


“Human speech is a signal that has these complex underpinnings,” he says. “There’s a cognitive aspect, the mind, and motoric aspects. We use the vocal instrument to create the signal, which in turn gets processed by people.”

Narayanan was fascinated by all the data involved in helping a conversation go right — and how easily conversations can go wrong.

He also became interested in the ways developmental disorders and health conditions could change the process of creating and interpreting speech, as well as how the rich diversity of human cultural contexts could impact the efficacy of voice recognition and synthesis.

In 2000, Narayanan founded USC’s Signal Analysis and Interpretation Laboratory (SAIL) to focus “on human-centered signal and information processing that address key societal needs.”

Over the last two decades, SAIL has enabled advances in audio, speech, language, image, video, and bio signal processing; human and environment sensing and imaging; and human-centered machine learning. The lab also applies its findings to create “technologies that are inclusive, and technologies that support inclusion,” Narayanan says.


By that, he means that in addition to making sure technologies like voice recognition actually work for everyone — some of his earliest work involved helping AI pick up on a speaker’s emotional state regardless of their spoken language — he uses signal analysis and interpretation to help uncover and spotlight inequality.

In 2017, SAIL created algorithms for analyzing movie script dialogue in order to measure representation of BIPOC characters. Another SAIL tool analyzed footage directly to track and tally female screen time and speaking time.

In 2019, the lab reported that an algorithm trained on human speech patterns could predict whether or not couples facing hard times would actually stay together. It did so even better than a trained therapist presented with video recordings of the couples in question. Instead of interpreting the content of the discussions or any visual cues, the algorithm focused on factors like cadence and pitch. A similar tool predicted changes in mental well-being in psychiatric patients as well as human physicians could.
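The article doesn't detail SAIL's features or model, but the general shape of such a pipeline (extract prosodic features like pitch and cadence, then fit a classifier on labeled outcomes) can be sketched. Here is a minimal, hypothetical Python sketch using librosa and scikit-learn; the feature set, thresholds, and model are illustrative assumptions, not SAIL's published system.

```python
# Hypothetical sketch: prosodic features (pitch, cadence) feeding a classifier.
# Illustrates the general approach described above, not SAIL's actual system.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def prosodic_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    # Pitch track via probabilistic YIN; NaN where a frame is unvoiced.
    f0, _, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]
    # Crude cadence proxies from frame energy: how much of the time the
    # speaker is active, and how often speech/pause transitions occur.
    rms = librosa.feature.rms(y=y)[0]
    active = rms > 0.5 * rms.mean()
    transitions = int(np.abs(np.diff(active.astype(int))).sum())
    return np.array([
        f0.mean() if f0.size else 0.0,  # average pitch
        f0.std() if f0.size else 0.0,   # pitch variability
        active.mean(),                  # fraction of frames with speech
        transitions,                    # pause/speech alternation count
    ])

# Training would use labeled session recordings (hypothetical data):
# X = np.stack([prosodic_features(p) for p in session_paths])
# clf = LogisticRegression().fit(X, stayed_together_labels)
```

The notable design point, per the reporting above, is that nothing lexical enters the feature vector: the predictor works from how things are said, not what is said.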

Building trust in AI

“Even if we speak the same language,” Narayanan says, “who we are shapes what we say and how we say it. And this is particularly fascinating for children, because their speech represents a moving target with ongoing developmental changes.”

It’s not just that a child’s vocal instrument is constantly changing as they grow. They’re also developing cognitively and socially. That can mean rapid shifts in the words they use and how they use them. When you add in other factors that might make those speech shifts different from the already diverse average — cultural contexts, speaking or hearing impairments, cognitive differences, or developmental delays — training a voice assistant to effectively communicate with kids poses a real challenge.

The analysis gets even more complicated when a system is interacting with two humans at once, especially if one is an adult and one is a child. Using Amazon Elastic Compute Cloud (Amazon EC2) to process their data, SAIL made advances in core competencies like automatic speech recognition to improve speaker diarization — the process of partitioning audio of human speech to determine which person is speaking when.
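A classic diarization baseline (not necessarily the method SAIL or Amazon use) is to embed short windows of audio and cluster the embeddings by speaker. A minimal sketch, assuming the number of speakers is known and using mean MFCC vectors as stand-ins for learned speaker embeddings:

```python
# Minimal diarization baseline: window the audio, embed each window,
# cluster embeddings by speaker. Mean MFCC vectors are a crude stand-in
# for learned speaker embeddings; assumes the speaker count is known.
import numpy as np
import librosa
from sklearn.cluster import AgglomerativeClustering

def diarize(path: str, n_speakers: int = 2, win_s: float = 1.0):
    y, sr = librosa.load(path, sr=16000)
    hop = int(win_s * sr)
    windows = [y[i:i + hop] for i in range(0, len(y) - hop + 1, hop)]
    # One embedding per window: the mean MFCC vector over its frames.
    embeddings = np.stack([
        librosa.feature.mfcc(y=w, sr=sr, n_mfcc=20).mean(axis=1)
        for w in windows
    ])
    labels = AgglomerativeClustering(n_clusters=n_speakers).fit_predict(embeddings)
    # (window start time in seconds, speaker id) for each window.
    return [(i * win_s, int(label)) for i, label in enumerate(labels)]
```

Real systems refine this considerably, with learned embeddings, overlap handling, and adult/child-specific acoustic models, but the embed-then-cluster structure is the core of the task.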


In 2021, SAIL also published a detailed empirical study of children’s speech recognition. They found that the state-of-the-art end-to-end systems setting high benchmarks on adult speech had serious shortcomings when it came to understanding children. The following year, the lab proposed a novel technique for estimating a child’s age based on temporal variability in their speech.

By measuring the same aspects of speech that make children difficult for AI to interact with — like variations in pause length and the time it takes to pronounce certain sounds — his team was able to reliably measure a child’s developmental stage. That could help AI adapt to the needs of users with less sophisticated language skills. Because the analysis relies on signals that can be stripped of other identifying information, the method also has the potential to help protect a child’s privacy.
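The published features aren't enumerated here, but the idea of regressing developmental stage on timing statistics can be sketched. In the hypothetical Python sketch below, a crude energy threshold stands in for voice activity detection, and a ridge regressor stands in for the lab's model; both are assumptions for illustration.

```python
# Hypothetical sketch: temporal speech statistics feeding an age regressor.
# The energy-based voice activity threshold and the ridge model are
# illustrative stand-ins, not the lab's published method.
import numpy as np
import librosa
from sklearn.linear_model import Ridge

def pause_stats(path: str, frame_s: float = 0.025) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    hop = int(frame_s * sr)
    rms = librosa.feature.rms(y=y, frame_length=2 * hop, hop_length=hop)[0]
    silent = rms < 0.3 * rms.mean()  # crude voice activity detection
    # Collect the length (in seconds) of each contiguous silent run.
    pauses, run = [], 0
    for is_silent in silent:
        if is_silent:
            run += 1
        elif run:
            pauses.append(run * frame_s)
            run = 0
    if run:
        pauses.append(run * frame_s)
    p = np.array(pauses) if pauses else np.zeros(1)
    return np.array([p.mean(), p.std(), len(p), float(silent.mean())])

# Fitting would use recordings with known ages (hypothetical data):
# X = np.stack([pause_stats(p) for p in recording_paths])
# model = Ridge().fit(X, ages)
```

Note that these features are aggregate timing statistics: no words and no speaker-identifying audio need leave the device, which is the privacy property described above.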

Narayanan refers to this and similar projects as “trustworthy speech processing,” and says he and collaborators he’s found through Amazon are working to spread interest in the idea across their booming field. In March, the International Speech Communication Association (ISCA) awarded him their ISCA Medal for Scientific Achievement — the group’s most prestigious award — for his sustained and diverse contributions to speech communication science and technology and its application to human-centered engineering systems. He will receive the medal and deliver the opening keynote lecture in August at Interspeech 2023, held in Dublin, Ireland.

Narayanan notes that the last five years have seen radical changes in our ability to gather and analyze information about human behavior.


“The technology systems have made this sort of engineering leap and allowed applications we hadn’t even imagined yet,” he says. “All these people are interacting with these devices in open, real-world environments, and we have the machine learning and deep learning advances to actually use that audio data.”

The next big challenge, he says, is figuring out how to process that data in a way that not only serves the user, but ensures their trust. In addition to continuing to study how various developmental differences might impact voice recognition—and how AI can learn to adapt to them—Narayanan hopes to find new ways to mask as much user data as possible for privacy while pulling out the signals that voice assistants need.

Ushering in the next generation of researchers

Working with Amazon enables Narayanan’s lab to explore key research themes through a practical lens. He notes that collaborations of this nature provide academics like himself with the time and support to tackle complex, delicate research questions — such as those involving children and other vulnerable populations.

In addition, Narayanan’s graduate students get to work directly with Amazon scientists to understand the potential practical applications of their research.

“This kind of partnership really takes research to the next level,” he says.

Narayanan has also encouraged dozens of his students to pursue internships at Amazon to explore what industry has to offer. Just as his time at Bell Laboratories helped to crystallize his own interests, he says, he’s watched countless young engineers find exciting new applications for their skills at Amazon.

What started as a gentle nudge to consider Amazon internships and job postings has grown into a steady pipeline of Amazon hires — one that Narayanan says is owed entirely to the merits of his lab’s alums.

Angeliki Metallinou, a senior applied science manager for Alexa AI, joined Amazon full time in 2014 with Narayanan’s encouragement. Alexa was a top-secret project at the time, so she didn’t know exactly what she’d be working on until she got there. She credits Narayanan with encouraging her to dive in.


“As a student, I hadn’t realized the extent that Amazon scientists collaborate with academia and are able to publish their work at top tier venues and conferences,” she recalls. “I wasn’t even aware that there was such a strong science community here. But Shri already had a few former PhD students working at Amazon, and he recommended it as a great place for an industry career.”

Rahul Gupta, a senior applied scientist for Amazon Alexa, first connected with Amazon for an internship near the end of his SAIL PhD in 2015. These days, he says, he has one or two SAIL students doing summer internships in his group alone.

“There’s really good cultural alignment between SAIL and Amazon,” Gupta says.

Narayanan, who proudly displays photos of all of his lab graduates on the wall of his office, admits he’s lost count of how many have worked at Amazon over the years.

“It’s exciting,” he says. “The AI revolution that’s happening has a very nice connection to what’s happening at Amazon, so naturally it was a place where my students found the most exciting challenges and opportunities. But I’ve also seen many of them progress into leadership positions, which I did my best to set them up for — I always encourage creativity and collaboration, and I don’t micromanage them in my lab.”

Now that his graduates are thriving at Amazon, he says, the internship opportunities for his current students are all the more robust.

“It sustains itself,” he says. “They shine in what they do at Amazon and in the community, and that connects back to the lab. It’s incredibly exciting.”






An inside look at Meta’s transition from C to Rust on mobile



Have you ever worked in legacy code? Are you curious what it takes to modernize systems at a massive scale?

Pascal Hartig is joined on the latest Meta Tech Podcast by Elaine and Buping, two software engineers working on a bold project to rewrite the decades-old C code in one of Meta’s core messaging libraries in Rust. It’s an ambitious effort that will transform a central messaging library that is shared across Messenger, Facebook, Instagram, and Meta’s AR/VR platforms.

They discuss what it takes to tackle a project of this scope (even without a background in Rust), how they’re approaching it, and what it means to optimize for ‘developer happiness.’

You can find the episode wherever you get your podcasts.

The Meta Tech Podcast is brought to you by Meta and highlights the work Meta’s engineers are doing at every level, from low-level frameworks to end-user features.

Send us feedback on Instagram, Threads, or X.

And if you’re interested in learning more about career opportunities at Meta, visit the Meta Careers page.






Amazon Research Awards recipients announced



Amazon Research Awards (ARA) provides unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines. This cycle, ARA received many excellent research proposals from across the world and today is publicly announcing 73 award recipients who represent 46 universities in 10 countries.

This announcement includes awards funded under five calls for proposals during the fall 2024 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability. Proposals were reviewed for the quality of their scientific content and their potential to impact both the research community and society. Additionally, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open-source licenses.

Recipients have access to more than 700 Amazon public datasets and can utilize AWS AI/ML services and tools through their AWS Promotional Credits. Recipients also are assigned an Amazon research contact who offers consultation and advice, along with opportunities to participate in Amazon events and training sessions.


“Automated Reasoning is an important area of research for Amazon, with potential applications across various features and applications to help improve security, reliability, and performance for our customers. Through the ARA program, we collaborate with leading academic researchers to explore challenges in this field,” said Robert Jones, senior principal scientist with the Cloud Automated Reasoning Group. “We were again impressed by the exceptional response to our Automated Reasoning call for proposals this year, receiving numerous high-quality submissions. Congratulations to the recipients! We’re excited to support their work and partner with them as they develop new science and technology in this important area.”


“At Amazon, we believe that solving the world’s toughest sustainability challenges benefits from both breakthrough scientific research and open and bold collaboration. Through programs like the Amazon Research Awards program, we aim to support academic research that could contribute to our understanding of these complex issues,” said Kommy Weldemariam, Director of Science and Innovation Sustainability. “The selected proposals represent innovative projects that we hope will help advance knowledge in this field, potentially benefiting customers, communities, and the environment.”

ARA funds proposals throughout the year in a variety of research areas. Applicants are encouraged to visit the ARA call for proposals page for more information or send an email to be notified of future open calls.

The tables below list, in alphabetical order by last name, fall 2024 cycle call-for-proposal recipients, sorted by research area.

AI for Information Security

| Recipient | University | Research title |
| --- | --- | --- |
| Christopher Amato | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms |
| Bernd Bischl | Ludwig Maximilian University of Munich | Improving Generative and Foundation Models Reliability via Uncertainty-awareness |
| Shiqing Ma | University of Massachusetts Amherst | LLM and Domain Adaptation for Attack Detection |
| Alina Oprea | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms |
| Roberto Perdisci | University of Georgia | ContextADBench: A Comprehensive Benchmark Suite for Contextual Anomaly Detection |

Automated Reasoning

| Recipient | University | Research title |
| --- | --- | --- |
| Nada Amin | Harvard University | LLM-Augmented Semi-Automated Proofs for Interactive Verification |
| Suguman Bansal | Georgia Institute of Technology | Certified Inductive Generalization in Reinforcement Learning |
| Ioana Boureanu | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems |
| Omar Haider Chowdhury | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege |
| Stefan Ciobaca | Alexandru Ioan Cuza University | An Interactive Proof Mode for Dafny |
| João Ferreira | INESC-ID | Polyglot Automated Program Repair for Infrastructure as Code |
| Sicun Gao | University of California, San Diego | Monte Carlo Trees with Conflict Models for Proof Search |
| Mirco Giacobbe | University of Birmingham | Neural Software Verification |
| Tobias Grosser | University of Cambridge | Synthesis-based Symbolic BitVector Simplification for Lean |
| Ronghui Gu | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software |
| Alexey Ignatiev | Monash University | Huub: Next-Gen Lazy Clause Generation |
| Kenneth McMillan | University of Texas at Austin | Synthesis of Auxiliary Variables and Invariants for Distributed Protocol Verification |
| Alexandra Mendes | University of Porto | Overcoming Barriers to the Adoption of Verification-Aware Languages |
| Jason Nieh | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software |
| Rohan Padhye | Carnegie Mellon University | Automated Synthesis and Evaluation of Property-Based Tests |
| Nadia Polikarpova | University of California, San Diego | Discovering and Proving Critical System Properties with LLMs |
| Fortunat Rajaona | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems |
| Subhajit Roy | Indian Institute of Technology Kanpur | Theorem Proving Modulo LLM |
| Gagandeep Singh | University of Illinois at Urbana–Champaign | Trustworthy LLM Systems using Formal Contracts |
| Scott Stoller | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege |
| Peter Stuckey | Monash University | Huub: Next-Gen Lazy Clause Generation |
| Yulei Sui | University of New South Wales | Path-Sensitive Typestate Analysis through Sparse Abstract Execution |
| Nikos Vasilakis | Brown University | Semantics-Driven Static Analysis for the Unix/Linux Shell |
| Ping Wang | Stevens Institute of Technology | Leveraging Large Language Models for Reasoning Augmented Searching on Domain-specific NoSQL Database |
| John Wawrzynek | University of California, Berkeley | GPU-Accelerated High-Throughput SAT Sampling |

AWS AI

| Recipient | University | Research title |
| --- | --- | --- |
| Panagiotis Adamopoulos | Emory University | Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations |
| Vikram Adve | University of Illinois at Urbana–Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models |
| Frances Arnold | California Institute of Technology | Closed-loop Generative Machine Learning for De Novo Enzyme Discovery and Optimization |
| Yonatan Bisk | Carnegie Mellon University | Useful, Safe, and Robust Multiturn Interactions with LLMs |
| Shiyu Chang | University of California, Santa Barbara | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing |
| Yuxin Chen | University of Pennsylvania | Provable Acceleration of Diffusion Models for Modern Generative AI |
| Tianlong Chen | University of North Carolina at Chapel Hill | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing |
| Mingyu Ding | University of North Carolina at Chapel Hill | Aligning Long Videos and Language as Long-Horizon World Models |
| Nikhil Garg | Cornell University | Market Design for Responsible Multi-agent LLMs |
| Jessica Hullman | Northwestern University | Human-Aligned Uncertainty Quantification in High Dimensions |
| Christopher Jermaine | Rice University | Fast, Trusted AI Using the EINSUMMABLE Compiler |
| Yunzhu Li | Columbia University | Physics-Informed Foundation Models Through Embodied Interactions |
| Pattie Maes | Massachusetts Institute of Technology | Understanding How LLM Agents Deviate from Human Choices |
| Sasa Misailovic | University of Illinois at Urbana–Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models |
| Kristina Monakhova | Cornell University | Trustworthy extreme imaging for science using interpretable uncertainty quantification |
| Todd Mowry | Carnegie Mellon University | Efficient LLM Serving on Trainium via Kernel Generation |
| Min-hwan Oh | Seoul National University | Mutually Beneficial Interplay Between Selection Fairness and Context Diversity in Contextual Bandits |
| Patrick Rebeschini | University of Oxford | Optimal Regularization for LLM Alignment |
| Jose Renau | University of California, Santa Cruz | Verification Constrained Hardware Optimization using Intelligent Design Agentic Programming |
| Vilma Todri | Emory University | Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations |
| Aravindan Vijayaraghavan | Northwestern University | Human-Aligned Uncertainty Quantification in High Dimensions |
| Wei Yang | University of Texas at Dallas | Optimizing RISC-V Compilers with RISC-LLM and Syntax Parsing |
| Huaxiu Yao | University of North Carolina at Chapel Hill | Aligning Long Videos and Language as Long-Horizon World Models |
| Amy Zhang | University of Washington | Tools for Governing AI Agent Autonomy |
| Ruqi Zhang | Purdue University | Efficient Test-time Alignment for Large Language Models and Large Multimodal Models |
| Zheng Zhang | Rutgers University-New Brunswick | AlphaQC: An AI-powered Quantum Circuit Optimizer and Denoiser |

AWS Cryptography

| Recipient | University | Research title |
| --- | --- | --- |
| Alexandra Boldyreva | Georgia Institute of Technology | Quantifying Information Leakage in Searchable Encryption Protocols |
| Maria Eichlseder | Graz University of Technology, Austria | SALAD – Systematic Analysis of Lightweight Ascon-based Designs |
| Venkatesan Guruswami | University of California, Berkeley | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing |
| Joseph Jaeger | Georgia Institute of Technology | Analyzing Chat Encryption for Group Messaging |
| Aayush Jain | Carnegie Mellon University | Large Scale Multiparty Silent Preprocessing for MPC from LPN |
| Huijia Lin | University of Washington | Large Scale Multiparty Silent Preprocessing for MPC from LPN |
| Hamed Nemati | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library |
| Karl Palmskog | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library |
| Chris Peikert | University of Michigan, Ann Arbor | Practical Third-Generation FHE and Bootstrapping |
| Dimitrios Skarlatos | Carnegie Mellon University | Scale-Out FHE LLMs on GPUs |
| Vinod Vaikuntanathan | Massachusetts Institute of Technology | Can Quantum Computers (Really) Factor? |
| Daniel Wichs | Northeastern University | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing |
| David Wu | University of Texas at Austin | Fast Private Information Retrieval and More using Homomorphic Encryption |

Sustainability

| Recipient | University | Research title |
| --- | --- | --- |
| Meeyoung Cha | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring |
| Jingrui He | University of Illinois at Urbana–Champaign | Foundation Model Enabled Earth’s Ecosystem Monitoring |
| Pedro Lopes | University of Chicago | AI-powered Tools that Enable Engineers to Make & Re-make Sustainable Hardware |
| Cheng Yaw Low | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring |






Independent evaluations demonstrate Nova Premier’s safety



AI safety is a priority at Amazon. Our investment in safe, transparent, and responsible AI (RAI) includes collaboration with the global community and policymakers. We are members of and collaborate with organizations such as the Frontier Model Forum, the Partnership on AI, and other forums organized by government agencies such as the National Institute of Standards and Technology (NIST). Consistent with Amazon’s endorsement of the Korea Frontier AI Safety Commitments, we published our Frontier Model Safety Framework earlier this year.

Amazon Nova Premier’s guardrails help prevent generation of unsafe content.

During the development of the Nova Premier model, we conducted a comprehensive evaluation to assess its performance and safety. This included testing on both internal and public benchmarks, as well as internal automated and third-party red-teaming exercises. Once the final model was ready, we prioritized obtaining unbiased, third-party evaluations of the model’s robustness against RAI controls. In this post, we outline the key findings from these evaluations, demonstrating the strength of our testing approach and Nova Premier’s standing as a safe model. Specifically, we cover our evaluations with two third-party evaluators: PRISM Eval and ActiveFence.

Evaluation of Nova Premier with PRISM Eval

PRISM Eval’s Behavior Elicitation Tool (BET) dynamically and systematically stress-tests AI models’ safety guardrails. The methodology focuses on measuring how many adversarial attempts (steps) it takes to get a model to generate harmful content across several key risk dimensions. The central metric is “steps to elicit” — the number of increasingly sophisticated prompting attempts required before a model generates an inappropriate response. A higher number of steps indicates stronger safety measures, as the model is more resistant to manipulation. The PRISM risk dimensions (inspired by the MLCommons AI Safety Benchmarks) include CBRNE weapons, violent crimes, non-violent crimes, defamation, and hate, amongst several others.
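In code terms, a "steps to elicit" harness is a loop that escalates adversarial prompts until a judge flags the model's output, then records how many attempts that took. The sketch below is a generic reconstruction of the metric from the description above, not PRISM Eval's BET implementation; `generate` and `is_harmful` are placeholders for the model under test and a safety judge.

```python
# Generic "steps to elicit" harness, reconstructed from the description
# above; not PRISM Eval's BET implementation. `generate` and `is_harmful`
# are placeholders for the model under test and a safety-judge model.
from typing import Callable, Sequence

def steps_to_elicit(
    generate: Callable[[str], str],      # model under test
    is_harmful: Callable[[str], bool],   # safety judge
    escalating_prompts: Sequence[str],   # increasingly adversarial attempts
) -> int:
    """Number of attempts before the model produces harmful output.

    Returns len(escalating_prompts) + 1 if it never does; higher is safer.
    """
    for step, prompt in enumerate(escalating_prompts, start=1):
        if is_harmful(generate(prompt)):
            return step
    return len(escalating_prompts) + 1
```

Averaging this count over the prompts in each risk dimension gives per-model scores like those reported below.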


Using the BET Eval tool and its V1.0 metric, which is tailored toward non-reasoning models, we compared the recently released Nova models (Pro and Premier) to the latest models in the same class: Claude (3.5 v2 and 3.7 non-reasoning) and Llama4 Maverick, all available through Amazon Bedrock. PRISM BET conducts black-box evaluations (where model developers don’t have access to the test prompts) of models integrated with their API. The evaluation conducted with BET Eval MAX, PRISM’s most comprehensive and aggressive testing suite, revealed significant variations in safety against malicious instructions. Nova models demonstrated superior overall safety performance, with an average of 43 steps for Premier and 52 steps for Pro, compared to 37.7 for Claude 3.5 v2 and fewer than 12 steps for the other models in the comparison set (9.9 for Claude 3.7, 11.5 for Claude 3.7 thinking, and 6.5 for Maverick). This higher step count suggests that, on average, Nova’s safety guardrails are more sophisticated and harder to circumvent through adversarial prompting. The figure below presents the number of steps per harm category evaluated through BET Eval MAX.

Results of tests using PRISM’s BET Eval MAX testing suite.

The PRISM evaluation provides valuable insights into the relative safety of different Amazon Bedrock models. Nova’s strong performance, particularly in hate speech and defamation resistance, represents meaningful progress in AI safety. However, the results also highlight the ongoing challenge of building truly robust safety measures into AI systems. As the field continues to evolve, frameworks like BET will play an increasingly important role in benchmarking and improving AI safety. As a part of this collaboration, Nicolas Miailhe, CEO of PRISM Eval, said, “It’s incredibly rewarding for us to see Nova outperforming strong baselines using the BET Eval MAX; our aim is to build a long-term partnership toward safer-by-design models and to make BET available to various model providers.” Organizations deploying AI systems should carefully consider these safety metrics when selecting models for their applications.

Manual red teaming with ActiveFence

The AI safety and security company ActiveFence benchmarked Nova Premier on Bedrock using prompts distributed across Amazon’s eight core RAI categories. ActiveFence also evaluated Claude 3.7 (non-reasoning mode) and GPT-4.1 API on the same set. The flag rate on Nova Premier was lower than that on the other two models, indicating that Nova Premier is the safest of the three.

| Model | 3P flag rate [↓ is better] |
| --- | --- |
| Nova Premier | 12.0% |
| Sonnet 3.7 (non-reasoning) | 20.6% |
| GPT-4.1 API | 22.4% |
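For clarity, the flag rate in the table is simply the share of red-team prompts whose responses the third-party reviewers flagged as unsafe. A toy illustration of the metric follows; the counts are invented, not ActiveFence's data.

```python
# Toy illustration of the flag-rate metric; counts are invented.
def flag_rate(flags: list[bool]) -> float:
    """Percentage of responses flagged as unsafe by reviewers."""
    return 100.0 * sum(flags) / len(flags)

# E.g., 120 flagged responses out of 1,000 prompts gives 12.0%.
print(flag_rate([True] * 120 + [False] * 880))  # 12.0
```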


“Our role is to think like an adversary but act in service of safety,” said Guy Paltieli from ActiveFence. “By conducting a blind stress test of Nova Premier under realistic threat scenarios, we helped evaluate its security posture in support of Amazon’s broader responsible-AI goals, ensuring the model could be deployed with greater confidence.”

These evaluations conducted with PRISM and ActiveFence give us confidence in the strength of our guardrails and our ability to protect our customers’ safety when they use our models. While these evaluations demonstrate strong safety performance, we recognize that AI safety is an ongoing challenge requiring continuous improvement. These assessments represent a point-in-time snapshot, and we remain committed to regular testing and enhancement of our safety measures. No AI system can guarantee perfect safety in all scenarios, which is why we maintain monitoring and response systems after deployment.

Acknowledgments: Vincent Ponzo, Elyssa Vincent




