Events & Conferences
Cracking the code of how diseases affect the body
Early in her career, computer scientist Marinka Zitnik confronted a biomedical mystery: among 12,000 genes, which handful played a role in the response of a model organism to bacterial infection? A genuine needle-in-a-haystack situation.
But when Zitnik fed the biomedical data into a machine learning algorithm of her own devising, it predicted eight genes most likely to be involved. When those candidates were tested in the lab, the research team found that six of them were indeed implicated in the infection. Her method had proven sensationally successful.
“As someone who was trained in computer science at the time, it was so rewarding to make an impact in another area,” says Zitnik. “It was a turning point for me.”
That turning point, in 2013, led to a decade of research in machine learning and to Zitnik’s current role as assistant professor of biomedical informatics at Harvard Medical School. At Harvard’s Zitnik Lab, she is focused on how machine learning can enable accurate diagnoses and the development of new treatments and therapies. And with the support of an Amazon Research Award, she is working to unlock the potential of AI-augmented drug discovery at the global scale through the online platform Therapeutics Data Commons.
Today, of course, bioinformatics is an established and growing discipline. But during Zitnik’s final year of high school it was a magic word, one she hadn’t heard before, that suddenly revealed how she could combine her passion for computers, programming, and mathematics with her ambition to make a big impact on society.
“I stumbled across a lecture given by a university recruiter, and I learned this word. Bioinformatics combines computation and biology. It was an emerging area that really sparked my interest,” says Zitnik. Following her subsequent degree in computer science and mathematics at the University of Ljubljana, Slovenia, she stayed and started a PhD in computer science in 2012, all the while with medicine in mind.
“I wanted to deeply understand the complex problems in biology and medicine that I could use computation to help solve,” Zitnik says.
Bottlenecks and challenges
Early in her PhD, Zitnik published several machine learning papers that were read by scientists at a variety of biomedical institutions. Many reached out to invite her to their labs to collaborate in applying her algorithms to their data. During her PhD, Zitnik joined forces with clinicians, biomedical researchers, geneticists, and computer scientists around the world, including at Stanford University and Imperial College London.
“I wanted to learn about the process of fundamental biological discovery in a lab — the bottlenecks and the challenges,” she says.
One of these collaborations — with Baylor College of Medicine in Houston, Texas — was particularly encouraging: the 12,000-gene challenge. The conventional approach would have required many thousands of screening experiments, testing each gene in turn. The success of Zitnik’s algorithms meant the saving of a great deal of time and resources.
“That was the first time I saw that coupling AI predictions with experimental biological work in the lab can improve experimental yield by an order of magnitude,” says Zitnik.
Fast forward to 2019, when Zitnik arrived at Harvard University to set up her lab. Zitnik focused on two closely linked areas of medicine that could also benefit from AI. One is how machine learning can enable an accurate diagnosis for a patient based on a wide variety of information, from their genetic code and blood test results to their medical history and lifestyle data. The second area involves identifying and developing possible treatments and therapies for these diagnoses.
Therapeutics Data Commons
More than this, though, Zitnik wanted to unlock the potential of AI-augmented medicine at the global scale. From her early work with the biomedical community, she understood all too well the difficulty in accessing and curating high-quality medical data to train ML models. She addressed these twin challenges head on, leveraging Amazon Elastic Compute Cloud (EC2) and AWS ML deployment tools via her Amazon Research Award to launch Therapeutics Data Commons (TDC), an international initiative to access and evaluate AI capability across therapeutic modalities and stages of discovery.
At its core, TDC is a collection of open-source data sets and state-of-the-art ML models focused on drug discovery and development, accompanied by a broader ecosystem of resources and tools that include benchmarking and leaderboards for cutting-edge ML models.
“It’s a meeting point between biomedical and biochemical researchers, and machine learning scientists,” says Zitnik. “It’s a thriving community.”
TDC is the largest open-source platform of its kind in the world. Zitnik runs it with collaborating institutions including MIT, Stanford University, Georgia Institute of Technology, Cornell University, University of Illinois Urbana-Champaign, and Carnegie Mellon University, and with additional support from the pharmaceutical industry and tech companies. TDC covers the entire process of drug discovery and development, from identifying potentially therapeutic molecules to the optimizing and planning of laboratory experiments.
The platform holds data from anonymized electronic health records, medical imaging, genomics, clinical trials data, and lots more. Biomedical researchers can use TDC’s data, or bring their own data and challenges, and collaborate with ML scientists to increase the speed of drug discovery while also reducing the otherwise enormous cost of bringing new drugs to market. It has already been used by more than 200,000 scientists worldwide, says Zitnik.
Help for rare diseases
Zitnik is also keen to use her technology to help patients and clinicians working on rare diseases. There are over 7,000 rare diseases in the world, says Zitnik. Each of them has a small number of known cases, but collectively they affect many people. Could AI help here?
To develop a diagnostic model for a common disease typically requires data from thousands of patients, labelled with that diagnosis. For rare diseases, that labelled patient data simply doesn’t exist. “This problem cannot be solved by throwing more money at it,” says Zitnik. “It requires a new way of thinking.”
Instead, Zitnik and her team, which includes postdoctoral fellow Emily Alsentzer and graduate researcher Michelle Li, are incorporating medical principles and prior scientific knowledge about biological interactions, chemistry, genetics, patient symptoms, and drug interactions into the neural architecture of their models.
“This allows us to train sophisticated deep learning models using very small amounts of labelled patient data, and sometimes no patient data at all,” says Zitnik.
A collaboration with a Harvard-led study called the Undiagnosed Diseases Network (UDN) has shown that the approach works. Someone with a rare genetic disease that has defied diagnosis at the local level can be referred to the UDN’s network of clinical and research experts across 12 U.S. clinical sites. A diagnosis can resolve the burden of uncertainty for the patient and hopefully unlock the possibility of treatments. Of the 2,500 participants so far accepted into the UDN study, 627 have been successfully diagnosed — each case a hard-fought win.
When Zitnik’s team applied their model to the medical data of 465 of these patients — a data set that excluded their actual diagnosis — the results were striking. The model was asked to predict for each patient the genes most likely responsible for their illness. For three-quarters of the patients, the disease-causing gene was in the model’s top five predictions.
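The “top five predictions” result is a top-k hit-rate evaluation. A minimal sketch of that metric, using invented patient IDs, gene rankings, and diagnoses rather than the study’s actual data:

```python
def top_k_hit_rate(rankings, true_genes, k=5):
    """Fraction of patients whose causal gene appears in the model's top-k predictions."""
    hits = sum(1 for patient, ranked in rankings.items()
               if true_genes[patient] in ranked[:k])
    return hits / len(rankings)

# Illustrative data: each patient maps to a model-ranked list of candidate genes.
rankings = {
    "p1": ["KMT2D", "TP53", "BRCA2"],
    "p2": ["CFTR", "SCN1A", "MECP2"],
    "p3": ["FBN1", "COL1A1", "LMNA"],
    "p4": ["TTN", "DMD", "MYH7"],
}
true_genes = {"p1": "KMT2D", "p2": "MECP2", "p3": "PAH", "p4": "DMD"}

print(top_k_hit_rate(rankings, true_genes, k=3))  # 3 of 4 patients hit: 0.75
```

In the UDN experiment the equivalent figure, with k=5 over 465 patients, came out at roughly three-quarters.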
“The next stage is to use it in real-world settings to assist the clinical teams in the evaluation of undiagnosed patients,” says Zitnik.
The tool has drawn considerable interest from the medical community, says Zitnik. She is planning pilot studies with clinics in Boston and Israel that are not part of the UDN to further evaluate the model as a diagnostic recommendation tool for new cases. Zitnik is also in discussions with several patient-led foundations centered around individual rare diseases, with the goal of providing them with a suite of user-friendly tools.
That is something Amazon Web Services supports. “When we are looking to deploy a model in biomedical or clinical settings, we use SageMaker,” Zitnik says. Amazon SageMaker can be used to turn ML models into standalone tools for public release, for example, or to place algorithms in cloud-based containers for sharing them with collaborators.
The power of the cloud for biomedical data
Cloud computing more broadly is critical to the work in the Zitnik lab.
“We need to train our models repeatedly on many different kinds of health data, to make sure they perform well across diverse patient populations, diverse chemical structures and so on, even if the input data is relatively messy,” says Zitnik. Her Amazon Research Award provided AWS credits for access to the high-powered parallel computing required by these training-hungry models.
In addition to the launch of TDC, Zitnik’s Amazon award supported discrete research projects. In 2021, as the COVID-19 pandemic raged around the world, Zitnik and her team wanted to know how effective AI methods could be at identifying existing drugs that could be repurposed to treat emerging pathogens. Identifying drugs already on the market or in late-stage clinical trials can save many years, and potentially billions of dollars, compared with developing a drug from scratch.
Zitnik’s team first trained a geometric deep learning model on the human interactome — the complete network of physical interactions between proteins in the human body. These networks tell us what parts of human cells’ machinery are affected by a given drug molecule.
Once the model was trained, they fed it data on over 7,500 existing drugs and their mechanisms of action. From these, the model ranked 6,340 candidate drugs. Biomedical researchers screened the top 918 suggestions on cells infected with SARS-CoV-2, the virus that causes COVID-19, and found 77 drugs that had a strong or weak effect on the virus. They used these results to fine-tune the model’s predictions before finally screening the top-ranked drugs in human cells. They identified six drugs that reduced viral infection; of these, four could, in principle, be repurposed to treat COVID-19.
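The rank-screen-refine loop described above can be sketched schematically. All drug names, scores, and the score-adjustment rule below are invented placeholders standing in for the actual geometric deep learning model and its fine-tuning:

```python
def select_for_screening(scored_drugs, top_n):
    """Rank candidate drugs by model score and return the top-n for lab screening."""
    ranked = sorted(scored_drugs, key=lambda d: d["score"], reverse=True)
    return ranked[:top_n]

def refine_scores(scored_drugs, lab_results, boost=0.2):
    """Nudge model scores using lab outcomes (a stand-in for true model fine-tuning)."""
    refined = []
    for d in scored_drugs:
        effect = lab_results.get(d["name"])
        delta = boost if effect == "active" else -boost if effect == "inactive" else 0.0
        refined.append({**d, "score": d["score"] + delta})
    return refined

drugs = [{"name": "drugA", "score": 0.9}, {"name": "drugB", "score": 0.7},
         {"name": "drugC", "score": 0.6}, {"name": "drugD", "score": 0.2}]

shortlist = select_for_screening(drugs, top_n=3)        # send to the lab
lab = {"drugA": "active", "drugB": "inactive", "drugC": "active"}
refined = refine_scores(drugs, lab)                     # fold lab results back in
final = select_for_screening(refined, top_n=2)          # final screen in human cells
print([d["name"] for d in final])  # ['drugA', 'drugC']
```

The real pipeline followed the same shape at scale: 6,340 ranked candidates, a 918-drug first screen, a fine-tuning pass, and a final screen yielding six active drugs.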
“It’s an exciting example of how AI can accelerate drug discovery and development. We were able to compress the timeline of this kind of research — from data collection to final models and predictions being tested in the lab — from years to months,” says Zitnik. Three months, in this case.
This is impressive in itself, but the experiment also revealed another aspect of the power of AI approaches.
Cascading network effects
A well-established strategy for drug discovery is to exploit molecular docking. If an infecting pathogen needs to dock with a particular protein on the surface of human cells to proliferate, a therapeutic molecule that docks with that protein instead could block the action of the pathogen. Indeed, Zitnik’s model did identify one drug that bound to the same proteins targeted by SARS-CoV-2. But here’s the kicker — it also found 76 drugs that successfully reduced viral infection through indirect systemic effects.
“One of the biggest outcomes of the work was the discovery of this group of drugs that seem to work through cascading network effects, indirectly impacting the proteins the virus attacks,” says Zitnik. “We call these network drugs. Without algorithms such as graph neural networks, which can make indirect observations and inferences using principles grounded in biomedical knowledge, we would not be able to identify such drugs.”
This new way to approach discovery, powered by biomedical AI, excites Zitnik for the future. She sees the potential for such tools to generate more accurate scientific hypotheses tailored to individual cells, diseases, and patients, and to help bridge the gap between laboratory and clinical settings:
“I can’t wait to see how these developments will continue to shape our world.”
An inside look at Meta’s transition from C to Rust on mobile
Have you ever worked in legacy code? Are you curious what it takes to modernize systems at a massive scale?
Pascal Hartig is joined on the latest Meta Tech Podcast by Elaine and Buping, two software engineers working on a bold project to rewrite the decades-old C code in one of Meta’s core messaging libraries in Rust. It’s an ambitious effort that will transform a central messaging library that is shared across Messenger, Facebook, Instagram, and Meta’s AR/VR platforms.
They discuss taking on a project of this scope – even without a background in Rust – how they’re approaching it, and what it means to optimize for ‘developer happiness.’
Download or listen to the episode below:
You can also find the episode wherever you get your podcasts, including:
The Meta Tech Podcast is brought to you by Meta and highlights the work Meta’s engineers are doing at every level – from low-level frameworks to end-user features.
Send us feedback on Instagram, Threads, or X.
And if you’re interested in learning more about career opportunities at Meta, visit the Meta Careers page.
Amazon Research Awards recipients announced
Amazon Research Awards (ARA) provides unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines. This cycle, ARA received many excellent research proposals from across the world and today is publicly announcing 73 award recipients who represent 46 universities in 10 countries.
This announcement includes awards funded under five calls for proposals during the fall 2024 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability. Proposals were reviewed for the quality of their scientific content and their potential to impact both the research community and society. Additionally, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open-source licenses.
Recipients have access to more than 700 Amazon public datasets and can utilize AWS AI/ML services and tools through their AWS Promotional Credits. Recipients also are assigned an Amazon research contact who offers consultation and advice, along with opportunities to participate in Amazon events and training sessions.
“Automated Reasoning is an important area of research for Amazon, with potential applications across various features and applications to help improve security, reliability, and performance for our customers. Through the ARA program, we collaborate with leading academic researchers to explore challenges in this field,” said Robert Jones, senior principal scientist with the Cloud Automated Reasoning Group. “We were again impressed by the exceptional response to our Automated Reasoning call for proposals this year, receiving numerous high-quality submissions. Congratulations to the recipients! We’re excited to support their work and partner with them as they develop new science and technology in this important area.”
“At Amazon, we believe that solving the world’s toughest sustainability challenges benefits from both breakthrough scientific research and open and bold collaboration. Through programs like the Amazon Research Awards program, we aim to support academic research that could contribute to our understanding of these complex issues,” said Kommy Weldemariam, Director of Science and Innovation Sustainability. “The selected proposals represent innovative projects that we hope will help advance knowledge in this field, potentially benefiting customers, communities, and the environment.”
ARA funds proposals throughout the year in a variety of research areas. Applicants are encouraged to visit the ARA call for proposals page for more information or send an email to be notified of future open calls.
The tables below list, in alphabetical order by last name, fall 2024 cycle call-for-proposal recipients, sorted by research area.
AI for Information Security
Recipient | University | Research title |
Christopher Amato | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms |
Bernd Bischl | Ludwig Maximilian University of Munich | Improving Generative and Foundation Models Reliability via Uncertainty-awareness |
Shiqing Ma | University Of Massachusetts Amherst | LLM and Domain Adaptation for Attack Detection |
Alina Oprea | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms |
Roberto Perdisci | University of Georgia | ContextADBench: A Comprehensive Benchmark Suite for Contextual Anomaly Detection |
Automated Reasoning
Recipient | University | Research title |
Nada Amin | Harvard University | LLM-Augmented Semi-Automated Proofs for Interactive Verification |
Suguman Bansal | Georgia Institute of Technology | Certified Inductive Generalization in Reinforcement Learning |
Ioana Boureanu | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems |
Omar Haider Chowdhury | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege |
Stefan Ciobaca | Alexandru Ioan Cuza University | An Interactive Proof Mode for Dafny |
João Ferreira | INESC-ID | Polyglot Automated Program Repair for Infrastructure as Code |
Sicun Gao | University Of California, San Diego | Monte Carlo Trees with Conflict Models for Proof Search |
Mirco Giacobbe | University of Birmingham | Neural Software Verification |
Tobias Grosser | University of Cambridge | Synthesis-based Symbolic BitVector Simplification for Lean |
Ronghui Gu | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software |
Alexey Ignatiev | Monash University | Huub: Next-Gen Lazy Clause Generation |
Kenneth McMillan | University of Texas At Austin | Synthesis of Auxiliary Variables and Invariants for Distributed Protocol Verification |
Alexandra Mendes | University of Porto | Overcoming Barriers to the Adoption of Verification-Aware Languages |
Jason Nieh | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software |
Rohan Padhye | Carnegie Mellon University | Automated Synthesis and Evaluation of Property-Based Tests |
Nadia Polikarpova | University Of California, San Diego | Discovering and Proving Critical System Properties with LLMs |
Fortunat Rajaona | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems |
Subhajit Roy | Indian Institute of Technology Kanpur | Theorem Proving Modulo LLM |
Gagandeep Singh | University of Illinois At Urbana–Champaign | Trustworthy LLM Systems using Formal Contracts |
Scott Stoller | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege |
Peter Stuckey | Monash University | Huub: Next-Gen Lazy Clause Generation |
Yulei Sui | University of New South Wales | Path-Sensitive Typestate Analysis through Sparse Abstract Execution |
Nikos Vasilakis | Brown University | Semantics-Driven Static Analysis for the Unix/Linux Shell |
Ping Wang | Stevens Institute of Technology | Leveraging Large Language Models for Reasoning Augmented Searching on Domain-specific NoSQL Database |
John Wawrzynek | University of California, Berkeley | GPU-Accelerated High-Throughput SAT Sampling |
AWS AI
Recipient | University | Research title |
Panagiotis Adamopoulos | Emory University | Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations |
Vikram Adve | University of Illinois at Urbana–Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models |
Frances Arnold | California Institute of Technology | Closed-loop Generative Machine Learning for De Novo Enzyme Discovery and Optimization |
Yonatan Bisk | Carnegie Mellon University | Useful, Safe, and Robust Multiturn Interactions with LLMs |
Shiyu Chang | University of California, Santa Barbara | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing |
Yuxin Chen | University of Pennsylvania | Provable Acceleration of Diffusion Models for Modern Generative AI |
Tianlong Chen | University of North Carolina at Chapel Hill | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing |
Mingyu Ding | University of North Carolina at Chapel Hill | Aligning Long Videos and Language as Long-Horizon World Models |
Nikhil Garg | Cornell University | Market Design for Responsible Multi-agent LLMs |
Jessica Hullman | Northwestern University | Human-Aligned Uncertainty Quantification in High Dimensions |
Christopher Jermaine | Rice University | Fast, Trusted AI Using the EINSUMMABLE Compiler |
Yunzhu Li | Columbia University | Physics-Informed Foundation Models Through Embodied Interactions |
Pattie Maes | Massachusetts Institute of Technology | Understanding How LLM Agents Deviate from Human Choices |
Sasa Misailovic | University of Illinois at Urbana–Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models |
Kristina Monakhova | Cornell University | Trustworthy extreme imaging for science using interpretable uncertainty quantification |
Todd Mowry | Carnegie Mellon University | Efficient LLM Serving on Trainium via Kernel Generation |
Min-hwan Oh | Seoul National University | Mutually Beneficial Interplay Between Selection Fairness and Context Diversity in Contextual Bandits |
Patrick Rebeschini | University of Oxford | Optimal Regularization for LLM Alignment |
Jose Renau | University of California, Santa Cruz | Verification Constrained Hardware Optimization using Intelligent Design Agentic Programming |
Vilma Todri | Emory University | Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations |
Aravindan Vijayaraghavan | Northwestern University | Human-Aligned Uncertainty Quantification in High Dimensions |
Wei Yang | University of Texas at Dallas | Optimizing RISC-V Compilers with RISC-LLM and Syntax Parsing |
Huaxiu Yao | University of North Carolina at Chapel Hill | Aligning Long Videos and Language as Long-Horizon World Models |
Amy Zhang | University of Washington | Tools for Governing AI Agent Autonomy |
Ruqi Zhang | Purdue University | Efficient Test-time Alignment for Large Language Models and Large Multimodal Models |
Zheng Zhang | Rutgers University-New Brunswick | AlphaQC: An AI-powered Quantum Circuit Optimizer and Denoiser |
AWS Cryptography
Recipient | University | Research title |
Alexandra Boldyreva | Georgia Institute of Technology | Quantifying Information Leakage in Searchable Encryption Protocols |
Maria Eichlseder | Graz University of Technology, Austria | SALAD – Systematic Analysis of Lightweight Ascon-based Designs |
Venkatesan Guruswami | University of California, Berkeley | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing |
Joseph Jaeger | Georgia Institute of Technology | Analyzing Chat Encryption for Group Messaging |
Aayush Jain | Carnegie Mellon | Large Scale Multiparty Silent Preprocessing for MPC from LPN |
Huijia Lin | University of Washington | Large Scale Multiparty Silent Preprocessing for MPC from LPN |
Hamed Nemati | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library |
Karl Palmskog | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library |
Chris Peikert | University of Michigan, Ann Arbor | Practical Third-Generation FHE and Bootstrapping |
Dimitrios Skarlatos | Carnegie Mellon University | Scale-Out FHE LLMs on GPUs |
Vinod Vaikuntanathan | Massachusetts Institute of Technology | Can Quantum Computers (Really) Factor? |
Daniel Wichs | Northeastern University | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing |
David Wu | University Of Texas At Austin | Fast Private Information Retrieval and More using Homomorphic Encryption |
Sustainability
Recipient | University | Research title |
Meeyoung Cha | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring |
Jingrui He | University of Illinois at Urbana–Champaign | Foundation Model Enabled Earth’s Ecosystem Monitoring |
Pedro Lopes | University of Chicago | AI-powered Tools that Enable Engineers to Make & Re-make Sustainable Hardware |
Cheng Yaw Low | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring |
Independent evaluations demonstrate Nova Premier’s safety
AI safety is a priority at Amazon. Our investment in safe, transparent, and responsible AI (RAI) includes collaboration with the global community and policymakers. We are members of and collaborate with organizations such as the Frontier Model Forum, the Partnership on AI, and other forums organized by government agencies such as the National Institute of Standards and Technology (NIST). Consistent with Amazon’s endorsement of the Korea Frontier AI Safety Commitments, we published our Frontier Model Safety Framework earlier this year.
During the development of the Nova Premier model, we conducted a comprehensive evaluation to assess its performance and safety. This included testing on both internal and public benchmarks, as well as automated internal and third-party red-teaming exercises. Once the final model was ready, we prioritized obtaining unbiased, third-party evaluations of the model’s robustness against RAI controls. In this post, we outline the key findings from these evaluations, demonstrating the strength of our testing approach and Nova Premier’s standing as a safe model. Specifically, we cover our evaluations with two third-party evaluators: PRISM Eval and ActiveFence.
Evaluation of Nova Premier with PRISM Eval
PRISM Eval’s Behavior Elicitation Tool (BET) dynamically and systematically stress-tests AI models’ safety guardrails. The methodology focuses on measuring how many adversarial attempts (steps) it takes to get a model to generate harmful content across several key risk dimensions. The central metric is “steps to elicit” — the number of increasingly sophisticated prompting attempts required before a model generates an inappropriate response. A higher number of steps indicates stronger safety measures, as the model is more resistant to manipulation. The PRISM risk dimensions (inspired by the MLCommons AI Safety Benchmarks) include CBRNE weapons, violent crimes, non-violent crimes, defamation, and hate, amongst several others.
Using the BET Eval tool and its V1.0 metric, which is tailored toward non-reasoning models, we compared the recently released Nova models (Pro and Premier) to the latest models in the same class: Claude (3.5 v2 and 3.7 non-reasoning) and Llama 4 Maverick, all available through Amazon Bedrock. PRISM BET conducts black-box evaluations (where model developers don’t have access to the test prompts) of models integrated with their API. The evaluation conducted with BET Eval MAX, PRISM’s most comprehensive/aggressive testing suite, revealed significant variations in safety against malicious instructions. Nova models demonstrated superior overall safety performance, with an average of 43 steps for Premier and 52 steps for Pro, compared to 37.7 for Claude 3.5 v2 and fewer than 12 steps for other models in the comparison set (namely, 9.9 for Claude 3.7, 11.5 for Claude 3.7 thinking, and 6.5 for Maverick). This higher step count suggests that on average, Nova’s safety guardrails are more sophisticated and harder to circumvent through adversarial prompting. The figure below presents the number of steps per harm category evaluated through BET Eval MAX.
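The per-model averages quoted above are means of per-category “steps to elicit” counts. A minimal sketch of that aggregation, using invented step counts rather than PRISM’s real data:

```python
def mean_steps_to_elicit(per_category_steps):
    """Average the number of adversarial attempts needed across risk categories;
    a higher mean indicates guardrails that are harder to circumvent."""
    return sum(per_category_steps) / len(per_category_steps)

# Illustrative per-category step counts (not PRISM's actual measurements).
results = {
    "model-a": [40, 48, 41],   # averages to 43.0
    "model-b": [10, 9, 11],    # averages to 10.0
}
for model, steps in results.items():
    print(model, round(mean_steps_to_elicit(steps), 1))
```

Comparing such means only makes sense when every model faces the same risk categories and the same escalation strategy, which is what a fixed suite like BET Eval MAX provides.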
The PRISM evaluation provides valuable insights into the relative safety of different Amazon Bedrock models. Nova’s strong performance, particularly in hate speech and defamation resistance, represents meaningful progress in AI safety. However, the results also highlight the ongoing challenge of building truly robust safety measures into AI systems. As the field continues to evolve, frameworks like BET will play an increasingly important role in benchmarking and improving AI safety. As a part of this collaboration Nicolas Miailhe, CEO of PRISM Eval, said, “It’s incredibly rewarding for us to see Nova outperforming strong baselines using the BET Eval MAX; our aim is to build a long-term partnership toward safer-by-design models and to make BET available to various model providers.” Organizations deploying AI systems should carefully consider these safety metrics when selecting models for their applications.
Manual red teaming with ActiveFence
The AI safety and security company ActiveFence benchmarked Nova Premier on Bedrock using prompts distributed across Amazon’s eight core RAI categories. ActiveFence also evaluated Claude 3.7 (non-reasoning mode) and GPT-4.1 API on the same set. The flag rate on Nova Premier was lower than that on the other two models, indicating that Nova Premier is the safest of the three.
Model | 3P Flag Rate [↓ is better] |
Nova Premier | 12.0% |
Sonnet 3.7 (non-reasoning) | 20.6% |
GPT4.1 API | 22.4% |
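The flag rate in the table is simply the share of adversarial prompts whose responses were flagged. A minimal check, with hypothetical prompt counts chosen to reproduce the published percentages:

```python
def flag_rate(flagged, total):
    """Share of adversarial prompts whose responses were flagged as unsafe."""
    return flagged / total

# Hypothetical counts scaled to 1,000 prompts per model (real counts not published).
models = {
    "Nova Premier": (120, 1000),   # 12.0%
    "Sonnet 3.7": (206, 1000),     # 20.6%
    "GPT-4.1 API": (224, 1000),    # 22.4%
}
safest = min(models, key=lambda m: flag_rate(*models[m]))
print(safest)  # Nova Premier
```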
“Our role is to think like an adversary but act in service of safety,” said Guy Paltieli from ActiveFence. “By conducting a blind stress test of Nova Premier under realistic threat scenarios, we helped evaluate its security posture in support of Amazon’s broader responsible-AI goals, ensuring the model could be deployed with greater confidence.”
These evaluations conducted with PRISM and ActiveFence give us confidence in the strength of our guardrails and our ability to protect our customers’ safety when they use our models. While these evaluations demonstrate strong safety performance, we recognize that AI safety is an ongoing challenge requiring continuous improvement. These assessments represent a point-in-time snapshot, and we remain committed to regular testing and enhancement of our safety measures. No AI system can guarantee perfect safety in all scenarios, which is why we maintain monitoring and response systems after deployment.
Acknowledgments: Vincent Ponzo, Elyssa Vincent