Events & Conferences
Pushing the boundaries of secure AI: Winners of the Amazon Nova AI Challenge

Since January 2025, ten elite university teams from around the world have taken part in the first-ever Amazon Nova AI Challenge, the Amazon Nova AI Challenge Trusted AI. Today, we’re proud to announce the winners and runners up of this global competition:
Defending Team Winner: Team PurpCorn-PLAN, University of Illinois Urbana-Champaign
Attacking Team Winner: Team PurCL, Purdue University
Defending Team Runner-Up: Team AlquistCoder, Czech Technical University in Prague
Attacking Team Runner-Up: Team RedTWIZ, Nova University Lisbon, Portugal
“We’ve worked on AI safety before, but this was the first time we had the chance to apply our ideas to a powerful, real-world model,” said Professor Gang Wang, faculty advisor to PurpCorn-PLAN (UIUC). “Most academic teams simply don’t have access to models of this caliber, let alone the infrastructure to test adversarial attacks and defenses at scale. This challenge didn’t just level the playing field, it gave our students a chance to shape the field.”
“For academic red teamers, testing against high-performing models is often out of reach,” said Professor Xiangyu Zhang, faculty advisor to PurCL (Purdue). “Open-weight models are helpful for prototyping, but they rarely reflect what’s actually deployed in production. Amazon gave us access to systems and settings that mirrored real-world stakes. That made the research and the win far more meaningful.”
These teams rose to the top after months of iterative development, culminating in a high-stakes, offline finals held in Santa Clara, California on June 26–27. There, the top four red teams and four model developer teams went head-to-head in a tournament designed to test the safety of AI coding models under adversarial conditions and the ingenuity of the researchers trying to break them.
A New Era of Adversarial Evaluation
The challenge tested a critical question facing the industry: Can we build AI coding assistants that are both helpful and secure?
Unlike static benchmarks, which tend to focus on isolated vulnerabilities, this tournament featured live, multi-turn conversations between attacker and defender bots. Red teams built automated “jailbreak” bots to trick AI into generating unsafe code. Defenders, starting from a custom 8B coding model, built by Amazon for the competition, applied reasoning-based guardrails, policy optimization, and vulnerability fixers to prevent misuse without breaking model utility.
Teams were evaluated using novel metrics that balanced security, diversity of attack, and functional code generation. Malicious responses were identified using a combination of static analysis tools (Amazon CodeGuru) and expert human annotation.
Prize Structure
Each of the 10 participating teams received $250,000 in sponsorship and AWS credits to support their work. The two winners – Team PurpCorn-PLAN (University of Illinois Urbana-Champaign) and Team PurCL (Purdue University) – each won an additional $250,000 split between each team’s members. The two runners-up – Team AlquistCoder (Czech Technical University in Prague) and Team RedTWIZ (Nova University Lisbon, Portugal) – were also awarded $100,000 each to split among their teams, bringing the total awards for the tournament to $700,000.
Highlights from the Challenge
Here are some of the most impactful advances uncovered during the challenge:
Multi-turn attack planning proved far more effective than single-turn jailbreaks
Winning red teams used progressive escalation, starting with benign prompts and gradually introducing malicious intent to bypass common guardrails. This insight reinforces the importance of addressing multi-turn adversarial conversations when evaluating AI security. Several teams developed planning and probing mechanisms that can hone in and identify weaknesses in a model defenses.
Reasoning-based safety alignment helped prevent vulnerabilities without degrading utility
Top model developer teams introduced deliberative reasoning, safety oracles, and GRPO-based policy optimization to teach AI assistants to reject unsafe prompts while still writing usable code. This shows it’s possible to build AI systems that are secure by design without sacrificing developer productivity, a key requirement for real-world adoption of AI coding tools.
Synthetic data generation was critical to scale training
Defender teams used a myriad of novel techniques to generate and refine training data using LLMs, while red teams developed new ways to mutated benign examples into adversarial ones and used LLMs to synthesize multi-turn adversarial data. These approaches offer a path to complimenting human red teaming with automated, low-cost methods for continuously improving model safety at industrial scale.
Novel evaluation methods exposed real tradeoffs in security vs. functionality
To prevent gaming the system, defender models were penalized for over-refusal or excessive blocking, encouraging teams to build nuanced, robust safety systems. Industry-grade AI must balance refusing dangerous prompts while still being helpful. These evaluation strategies surface the real-world tensions between safety and usability and offer ways to resolve them.
“I’ve been inspired by the creativity and technical excellence these students brought to the challenge,” said Eric Docktor, Chief Information Security Officer, Specialized Businesses, Amazon. “Each of the teams brought fresh perspectives to complex problems that will help accelerate the field of secure, trustworthy AI-assisted software development and advance how we secure AI systems at Amazon. What makes this tournament format particularly valuable is seeing how security concepts hold up under real adversarial pressure, which is essential for building secure, trustworthy AI coding systems that developers can rely on.”
“I’m particularly excited about how this tournament approach helped us understand AI safety in a deeply practical way. What’s especially encouraging is that we discovered we don’t have to choose between safety and utility and the participants showed us innovative ways to achieve both,” said Rohit Prasad, SVP of Amazon AGI. “Their creative strategies for both protecting and probing these systems will directly inform how we build more secure and reliable AI models. I believe this kind of adversarial evaluation will become essential as we work to develop foundation models that customers can trust with their most important tasks.”
What’s Next
Today, the finalists reunited at the Amazon Nova AI Summit in Seattle to present their findings, discuss emerging risks in AI-assisted coding, and explore how adversarial testing can be applied to other domains of Responsible AI, from healthcare to misinformation.
We’re proud to celebrate the incredible work of all participating teams. Their innovations are not just academic. They’re laying the foundation for a safer AI future.
Events & Conferences
A better path to pruning large language models

In recent years, large language models (LLMs) have revolutionized the field of natural-language processing and made significant contributions to computer vision, speech recognition, and language translation. One of the keys to LLMs’ effectiveness has been the exceedingly large datasets they’re trained on. The trade-off is exceedingly large model sizes, which lead to slower runtimes and higher consumption of computational resources. AI researchers know these challenges well, and many of us are seeking ways to make large models more compact while maintaining their performance.
To this end, we’d like to present a novel philosophy, “Prune Gently, Taste Often”, which focuses on a new way to do pruning, a compression process that removes unimportant connections within the layers of an LLM’s neural network. In a paper we presented at this year’s meeting of the Association for Computational Linguistics (ACL), we describe our framework, Wanda++, which can compress a model with seven billion parameters in under 10 minutes on a single GPU.
Measured according to perplexity, or how well a probability distribution predicts a given sample, our approach improves the model’s performance by 32 percent over its leading predecessor, called Wanda.
A brief history of pruning
Pruning is challenging for a number of reasons. First, training huge LLMs is expensive, and once they’re trained, runtime is expensive too. While pruning can make runtime cheaper, if it’s done later in the build process, it hurts performance. But if it’s done too early in the build process, it further exacerbates the first problem: increasing the cost of training.
When a model is trained, it builds a map of semantic connections gleaned from the training data. These connections, called parameters, gain or lose importance, or weight, as more training data is introduced. Pruning during the training stage, called “pruning-aware training,” is baked into the training recipe and performs model-wide scans of weights, at a high computational cost. What’s worse, pruning-aware training comes with a heavy trial burden of full-scale runs. Researchers must decide when to prune, how often, and what criteria to use to keep pretraining performance viable. Tuning such “hyperparameters” requires repeated model-wide culling experiments, further driving up cost
The other approach to pruning is to do it after the LLM is trained. This tends to be cheaper, taking somewhere between a few minutes and a few hours — compared to the weeks that training can take. And post-training pruning doesn’t require a large number of GPUs.
In this approach, engineers scan the model layer by layer for unimportant weights, as measured by a combination of factors such as how big the weight is and how frequently it factors into the model’s final output. If either number is low, the weight is more likely to be pruned. The problem with this approach is that it isn’t “gentle”: it shocks the structure of the model, which loses accuracy since it doesn’t learn anything from the absence of those weights, as it would have if they had been removed during training.
Striking a balance
Here’s where our philosophy presents a third path. After a model is fully trained, we scan it piece by piece, analyzing weights neither at the whole-model level nor at the layer level but at the level of decoding blocks: smaller, repeating building blocks that make up most of an LLM.
Within each decoding block, we feed in a small amount of data and collect the output to calibrate the weights, pruning the unimportant ones and updating the surviving ones for a few iterations. Since decoder blocks are small — a fraction of the size of the entire model — this approach requires only a single GPU, which can scan a block within minutes.
We liken our approach to the way an expert chef spices a complex dish. In cooking, spices are easy to overlook and hard to add at the right moment — and even risky, if handled poorly. One simply cannot add a heap of tarragon, pepper, and salt at the beginning (pruning-aware training) or at the end (layer-wide pruning) and expect to have the same results as if spices had been added carefully throughout. Similarly, our approach finds a balance between two extremes. Pruning block by block, as we do, is more like spicing a dish throughout the process. Hence the motto of our approach: Prune Gently, Taste Often.
From a technical perspective, the key is focusing on decoding blocks, which are composed of a few neural-network layers such as attention layers, multihead attention layers, and multilayer perceptrons. Even an LLM with seven billion parameters might have just 32 decoder blocks. Each block is small enough — say, 200 million parameters — to easily be scanned by a single GPU. Pruning a model at the block level saves resources by not consuming much GPU memory.
And while all pruning processes initially diminish performance, ours actually brings it back. Every time we scan a block, we balanceg pruning with performance until they’re optimized. Then we move on to the next block. This preserves both performance at the block level and overall model quality. With Wanda++, we’re offering a practical, scalable middle path for the LLM optimization process, especially for teams that don’t control the full training pipeline or budget.
What’s more, we believe our philosophy also helps address a pain point of LLM development at large companies. Before the era of LLMs, each team built its own models, with the services that a single LLM now provides achieved via orchestration of those models. Since none of the models was huge, each model development team received its own allocation of GPUs. Nowadays, however, computational resources tend to get soaked up by the teams actually training LLMs. With our philosophy, teams working on runtime performance optimization, for instance, could reclaim more GPUs, effectively expanding what they can explore.
Further implementations of Prune Gently, Taste Often could apply to other architectural optimizations. For instance, calibrating a model at the decoder-block level could convert a neural network with a dense structure, called a dense multilayer perceptron, to a less computationally intensive neural network known as a mixture of experts (MoE). In essence, per-decoder-block calibration can enable a surgical redesign of the model by replacing generic components with more efficient and better-performing alternatives such as Kolmogorov-Arnold Networks (KAN). While the Wanda++ philosophy isn’t a cure-all, we believe it opens up an exciting new path for re-thinking model compression and exploring future LLM architectures.
Events & Conferences
Three challenges in machine-based reasoning

Generative AI has made the past few years the most exhilarating time in my 30+-year career in the space of mechanized reasoning. Why? Because the computer industry and even the general public are now eager to talk about ideas that those of us working in logic have been passionate about for years. The challenges of language, syntax, semantics, validity, soundness, completeness, computational complexity, and even undecidability were previously too academic and obscure to be relevant to the masses. But all of that has changed. To those of you who are now discovering these topics: welcome! Step right in, we’re eager to work with you.
I thought it would be useful to share what I believe are the three most vexing aspects of making correct reasoning work in AI systems, e.g., generative-AI-based systems such as chatbots. The upcoming launch of the Automated-Reasoning-checks capability in Bedrock Guardrails was in fact motivated by these challenges. But we are far from done: due to the inherent difficulty of these problems, we as a community (and we on the Automated-Reasoning-checks team) will be working on these challenges for years to come.
Difficulty #1: Translating from natural to structured language
Humans usually communicate with imprecise and ambiguous language. Often, we are able to infer disambiguating detail from context. In some cases, when it really matters, we will try to clarify with each other (“did you mean to say… ?”). In other cases, even when we really should, we won’t.
This is often a source of confusion and conflict. Imagine that an employer defines eligibility for an employee HR benefit as “having a contract of employment of 0.2 full-time equivalent (FTE) or greater”. Suppose I tell you that I “spend 20% of my time at work, except when I took time off last year to help a family member recover from surgery”. Am I eligible for the benefit? When I said I “spend 20% of my time at work”, does that mean I am spending 20% of my working time, under the terms of a contract?
My statement has multiple reasonable interpretations, each with different outcomes for benefit eligibility. Something we do in Automated Reasoning checks is make multiple attempts to translate between the natural language and query predicates, using complementary approaches. This is a common interview technique: ask for the same information in different ways, and see if the facts stay consistent. In Automated Reasoning checks, we use solvers for formal logic systems to prove/disprove the equivalence of the different interpretations. If the translations differ at the semantic level, the application that uses Automated Reasoning checks can then ask for clarifications (e.g. “Can you confirm that you have a contract of employment for 20% of full time or greater?”).
Difficulty #2: Defining truth
Something that never fails to amaze me is how difficult it is for groups of people to agree on the meanings of rules. Complex rules and laws often have subtle contradictions that can go unnoticed until someone tries to reach consensus on their interpretation. The United Kingdom’s Copyrights, Designs, and Patents Act of 1988, for example, contains an inherent contradiction: it defines copyrightable works as those stemming from an author’s original intellectual creation, while simultaneously offering protection to works that require no creative human input — an incoherence that is particularly glaring in this age of AI-generated works.
The second source of trouble is that we seem to always be changing our rules. The US federal government’s per-diem rates, for example, change annually, requiring constant maintenance of any system that depends on those values.
Finally, few people actually deeply understand all of the corner cases of the rules that they are supposed to abide by. Consider the question of wearing earphones while driving: In some US states (e.g., Alaska) it’s illegal; in some states (e.g., Florida) it’s legal to wear one earphone only; while in other states (e.g., Texas), it’s actually legal. In an informal poll, very few of my friends and colleagues were confident in their understanding of the legality of wearing headphones while driving in the place where they most recently drove a car.
Automated Reasoning checks address these challenges by helping customers define what the truth should be in their domains of interest — be they tax codes, HR policies, or other rule systems — and by providing mechanisms for refining those definitions over time, as the rules change. As generative-AI-based (GenAI-based) chatbots emerged, something that captured the imagination of many of us is the idea that complex rule systems could be made accessible to the general public through natural-language queries. Chatbots could in the future give direct and easy-to-understand answers to questions like “Can I make a U-turn when driving in Tokyo, Japan?”, and by addressing the challenge of defining truth, Automated Reasoning checks can help ensure that the answer is reliable.
Difficulty #3: definitive reasoning
Imagine we have a set of rules (let’s call it R) and a statement (S) we want to verify. For example, R might be Singapore’s driving code, and S might be a question about U-turns at intersections in Singapore. We can encode R and S into Boolean logic, which computers understand, by combining Boolean variables in various ways.
Let’s say that encoding R and S needs just 500 bits — about 63 characters. This is a tiny amount of information! But even when our encoding of the rule system is small enough to fit in a text message, the number of scenarios we’d need to check is astronomical. In principle, we must consider all 2500 possible combinations before we can authoritatively declare S to be a true statement. A powerful computer today can perform hundreds of millions of operations in the time it takes you to blink. But even if we had all the computers in the world running at this blazing speed since the beginning of time, we still wouldn’t be close to checking all 2500 possibilities today.
Thankfully, the automated-reasoning community has developed a class of sophisticated tools, called SAT solvers, that make this type of combinatorial checking possible and remarkably fast in many (but not all) cases. Automated Reasoning checks make use of these tools when checking the validity of statements.
Unfortunately, not all problems can be encoded in a way that plays to the strengths of SAT solvers. For example, imagine a rule system has the provision “if every even number greater than 2 is the sum of two prime numbers, then the tax withholding rate is 30%; otherwise it’s 40%”. The problem is that to know the tax withholding rate, you need to know whether every even number greater than 2 is the sum of two prime numbers, and no one currently knows whether this is true. This statement is called Goldbach’s conjecture and has been an open problem since 1742. Still, while we don’t know the answer to Goldbach’s conjecture, we do know that it is either true or false, so we can definitively say that the tax withholding rate must be either 30% or 40%.
It’s also fun to think about whether it’s possible for a customer of Automated Reasoning checks to define a policy that is contingent on the output of Automated Reasoning checks. For instance, could the policy encode the rule “access is allowed if and only if Automated Reasoning checks say it is not allowed”? Here, no correct answer is possible, because the rule has created a contradiction by referring recursively to its own checking procedure. The best we can possibly do is answer “Unknown” (which is, in fact, what Automated Reasoning checks will answer in this instance).
The fact that a tool such as Automated Reasoning checks can return neither “true” nor “false” to statements like this was first identified by Kurt Gödel in 1931. What we know from Gödel’s result is that systems like Automated Reasoning checks can’t be both consistent and complete, so they must choose one. We have chosen to be consistent.
These three difficulties — translating natural language into structured logic, defining truth in the context of ever changing and sometimes contradictory rules, and tackling the complexity of definitive reasoning — are more than mere technical hurdles we face when we try to build AI systems with sound reasoning. They are problems that are deeply rooted in both the limitations of our technology and the intricacies of human systems.
With the forthcoming launch of Automated Reasoning checks in Bedrock Guardrails, we are tackling these challenges through a combination of complementary approaches: applying cross-checking methods to translate from ambiguous natural language to logical predicates, providing flexible frameworks to help customers develop and maintain rule systems, and employing sophisticated SAT solvers while carefully handling cases where definitive answers are not possible. As we work to improve the performance of the product on these challenges, we are not only advancing technology but also deepening our understanding of the fundamental questions that have shaped reasoning itself, from Gödel’s incompleteness theorem to the evolving nature of legal and policy frameworks.
Given our commitment to providing sound reasoning, the road ahead in the AI space is challenging. Challenge accepted!
Events & Conferences
Building a human-computer interface for everyone

What if you could control any device using only subtle hand movements?
New research from Meta’s Reality Labs is pointing even more firmly toward wrist-worn devices using surface electromyography (sEMG) becoming the future of human-computer interaction.
But how do you develop a wrist-worn input device that works for everyone?
Generalization has been one of the most significant challenges in the field of human-computer interaction (HCI). The machine learning models that power a device can be trained to respond to an individual’s hand gestures, but they struggle to apply that same learning to someone else. Essentially, novel HCI devices are usually one-size-fits-one.
On the latest episode of the Meta Tech Podcast, Pascal Hartig sits down with Sean B., Lauren G., and Jesse M. — research scientists on Meta’s EMG engineering and research team — to discuss how their team is tackling the challenge of generalization and reimagining how we interact with technology.
They discuss the road to creating a first-of-its-kind, generic human-computer neuromotor interface, what happens when software and hardware engineering meet neuroscience, and more!
Download or listen to the episode below:
You can also find the episode wherever you get your podcasts, including:
The Meta Tech Podcast is a podcast, brought to you by Meta, where we highlight the work Meta’s engineers are doing at every level – from low-level frameworks to end-user features.
Send us feedback on Instagram, Threads, or X.
And if you’re interested in learning more about career opportunities at Meta visit the Meta Careers page.
-
Events & Conferences3 months ago
Journey to 1000 models: Scaling Instagram’s recommendation system
-
Funding & Business1 month ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries
-
Jobs & Careers1 month ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Education1 month ago
VEX Robotics launches AI-powered classroom robotics system
-
Education1 month ago
AERDF highlights the latest PreK-12 discoveries and inventions
-
Mergers & Acquisitions1 month ago
Donald Trump suggests US government review subsidies to Elon Musk’s companies
-
Jobs & Careers1 month ago
Astrophel Aerospace Raises ₹6.84 Crore to Build Reusable Launch Vehicle
-
Podcasts & Talks1 month ago
Happy 4th of July! 🎆 Made with Veo 3 in Gemini
-
Podcasts & Talks1 month ago
OpenAI 🤝 @teamganassi
-
Jobs & Careers1 month ago
Telangana Launches TGDeX—India’s First State‑Led AI Public Infrastructure