
Margarida Ferreira on her AWS Cloud Operations applied science internship



Amazon Web Services (AWS) helps automate and facilitate much of what people do online, from managing customer data to conducting scientific research. So it’s only fitting that managers of AWS cloud resources (e.g., DevOps engineers) should get an assist from machine learning on some of their most common tasks. In her role as an applied science intern on the AWS Cloud Operations team, Margarida Ferreira explored program generation methods to streamline the work done by DevOps engineers.

DevOps engineers provision, operate, and manage applications on AWS. They deploy upgrades, monitor security, and make sure cloud resources are always operating optimally. As with any job, their day can involve repetitive work, whether the AWS application runs on hundreds of machines or more than 10,000.

The AWS Cloud Operations team owns tools that allow DevOps engineers to safely operate large and complex applications. With the help of applied science interns like Ferreira, the team is using various automation techniques to find time-saving opportunities in cloud management.

Constraint programming for automating repetitive tasks

Ferreira employed a novel approach to simplify AWS systems management, combining program synthesis and constraint programming to automate common tasks. She and others believe the approach is well suited to the job because it can guarantee a desired outcome.

Part of Margarida Ferreira’s research involves constraint programming, which can automatically generate program scripts given a specific set of restrictions.

“Program synthesis is the task of automatically generating a computer program in a programming language from a description of the desired behavior, without requiring manual coding by a programmer,” Ferreira explains. “It aims to bring the power of computation to a wider audience, by bridging the gap between a problem’s description in human-readable terms and the actual computer code that implements the solution. It’s useful for skilled programmers too, by allowing them to automate the implementation of repetitive, uninteresting snippets of code.

“I love the concept of synthesis — the idea that you can help people automate boring tasks that people don’t want to do manually.”

As a PhD candidate at Carnegie Mellon University (CMU), Ferreira specializes in automated reasoning and program synthesis. Part of her research involves constraint programming, which can automatically generate program scripts given a specific set of restrictions.

These scripts — often based on the analysis of log files from common, manual tasks — can then be used to automate future tasks, such as creating and setting up an Elastic Compute Cloud (EC2) instance. The process essentially teaches the computer to program itself using an example or demonstration.
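
To make that concrete, here is a toy sketch of the learn-from-a-demonstration step, assuming two hypothetical CloudTrail-style log entries; it is a deliberate simplification, not Ferreira’s actual synthesizer.

```python
# Toy "programming by demonstration" step: compare two log entries of the
# same API call and generalize every field that differs into a named
# parameter (a hole the synthesized script can fill per task).
# The log format and field names below are hypothetical.

def generalize(call_a: dict, call_b: dict) -> dict:
    """Return a script template where fields that differ become parameters."""
    template = {}
    for field in call_a:
        if call_a[field] == call_b.get(field):
            template[field] = call_a[field]       # constant across demos
        else:
            template[field] = f"<param:{field}>"  # varies, so parameterize
    return template

demo_1 = {"action": "RunInstances", "ImageId": "ami-111", "InstanceType": "t3.micro"}
demo_2 = {"action": "RunInstances", "ImageId": "ami-222", "InstanceType": "t3.micro"}

print(generalize(demo_1, demo_2))
# {'action': 'RunInstances', 'ImageId': '<param:ImageId>', 'InstanceType': 't3.micro'}
```

A real synthesizer searches a far richer space of programs, but the principle is the same: whatever stays constant across demonstrations becomes code, and whatever varies becomes an input.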

From physics to computers

Born and raised in Portugal, Ferreira began her higher education as a physics major at the Instituto Superior Técnico in Lisbon. However, after enrolling in a computer programming class, she quickly switched majors to computer science and engineering.

She loved the challenge of thinking about problems in a structured way, and how an algorithm or sequence of steps could help her solve them. Ferreira earned both a bachelor’s and master’s in computer science and engineering from the Instituto Superior Técnico.

After graduation, Ferreira took the advice of a mentor to move to the U.S., enrolling in a dual-PhD program in computer science and engineering at CMU and the Instituto Superior Técnico. She splits her time and coursework between the U.S. and Lisbon and is due to complete her dual PhDs in 2026.


At CMU, Ferreira developed an early interest in program synthesis and constraint programming. Her thesis goal is to use formal methods, theoretical guarantees, and proofs to optimize networks and make them more efficient.

Early in 2023, Ferreira realized she wanted to balance her academic pursuits with industry experience. After consulting with her advisor, Ruben Martins, an assistant professor at CMU, Ferreira was connected to Daniel Kroening, a senior principal scientist on the AWS Cloud Operations team and the internship program lead. Kroening and the team were looking to apply constraint programming to automate management of AWS cloud resources, and Ferreira was a natural fit.

“Amazon wants to make computing available to an audience that’s as large as possible and make the computing products as easy to use as possible,” Kroening says. “Our goal with the cloud ops internship program is to enable customers to use AWS products without programming by teaching computers to program themselves.”


Ferreira interviewed with other companies besides Amazon, but said the conversation with Kroening stood out.

“Daniel was very good at letting me know what’s special about AWS: the impact,” Ferreira says. “Millions of people use AWS every day. That’s what made me choose to work at Amazon. The research I did can impact the lives of so many people.”

Program synthesis: accuracy guaranteed

DevOps engineers can benefit from automation, but they also need to be able to trust in how a task is expedited behind the scenes. A manager might use the AWS interface to open an S3 bucket, for example, and verify whether a piece of data is stored correctly. But if there are hundreds of those buckets, checking each one can quickly become a laborious task.
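
As a rough illustration of that repetitive task (not a tool from the internship), the sketch below performs the bucket-by-bucket check with boto3; the object key config/expected.json is a hypothetical stand-in for whatever the engineer needs to verify.

```python
# Hedged sketch: loop over every S3 bucket in the account and verify that an
# expected object is present. Doing this by hand for hundreds of buckets is
# exactly the kind of work automation should absorb.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def object_exists(bucket: str, key: str) -> bool:
    """Return True if the object is stored in the bucket."""
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError:
        return False

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    status = "ok" if object_exists(name, "config/expected.json") else "MISSING"
    print(f"{name}: {status}")
```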

Using the log files of the manual tasks as constraints, Ferreira was able to use program synthesis to create an “automation runbook,” a script that automates a cloud management task with a guarantee of accuracy.

“Program synthesis gives you a formal guarantee in the form of a mathematical proof that goes step by step in showing that the program that it’s creating is doing what you asked,” Ferreira says.

The method adds an essential level of confidence for managers who need to ensure their cloud systems are running optimally.

“The whole value prop is that the customer can take an automation runbook as is without having to double, triple, or quadruple check it. With constraint programming, the runbook is guaranteed to give you an answer, but only one that satisfies the constraints,” adds Kroening.
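
A minimal sketch of that guarantee, using the open-source Z3 solver (the z3-solver package); the constraints below are invented for illustration rather than derived from real AWS log files. The point is structural: any model the solver returns satisfies every stated constraint by construction.

```python
# Constraint-programming sketch with Z3: ask the solver for runbook
# parameters that satisfy every constraint, or a report that none exist.
# The specific constraints are hypothetical.
from z3 import Int, Solver, sat

instance_count = Int("instance_count")
memory_gib = Int("memory_gib")

s = Solver()
s.add(instance_count >= 3)               # e.g., demand observed in the logs
s.add(memory_gib >= 4 * instance_count)  # capacity requirement per instance
s.add(memory_gib <= 64)                  # budget ceiling

if s.check() == sat:
    m = s.model()
    # Any satisfying model meets all constraints; that is the guarantee.
    print(f"instance_count = {m[instance_count]}, memory_gib = {m[memory_gib]}")
else:
    print("No parameter choice satisfies all constraints.")
```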

Pure research, palpable impact

Ferreira says she thoroughly enjoyed her experience at Amazon, in part because she found it freer than she expected. The research process at Amazon struck her as much like that of academia, where work is driven by problem statements, hypotheses, and general curiosity.


“I expected I would have to justify my research decisions in some way with Amazon products,” Ferreira says. “That was definitely not the case, and that was pleasantly surprising to me.”

Kroening says science interns at Amazon are encouraged to do research that can be published. “This is very much a science internship as opposed to, say, a software engineering internship,” he points out.

Regarding her longer-term plans, Ferreira emphasized her desire to be a role model for others from her home country who may be intimidated by moving to a large country to pursue their careers.

“Some people who come from a small country like Portugal don’t always feel they can come to a country like the United States and have a bigger impact,” she says. “Maybe they’re afraid or just unsure that they would be successful here. I want to appeal to people like that and say, hey, you should try it. It might be very rewarding, like it was for me.”







An inside look at Meta’s transition from C to Rust on mobile



Have you ever worked in legacy code? Are you curious what it takes to modernize systems at a massive scale?

Pascal Hartig is joined on the latest Meta Tech Podcast by Elaine and Buping, two software engineers working on a bold project to rewrite the decades-old C code in one of Meta’s core messaging libraries in Rust. It’s an ambitious effort that will transform a central messaging library that is shared across Messenger, Facebook, Instagram, and Meta’s AR/VR platforms.

They discuss what it takes to tackle a project of this scope without a prior background in Rust, how they’re approaching the rewrite, and what it means to optimize for ‘developer happiness.’


You can find the episode wherever you get your podcasts.

The Meta Tech Podcast is brought to you by Meta and highlights the work Meta’s engineers are doing at every level – from low-level frameworks to end-user features.

Send us feedback on Instagram, Threads, or X.

And if you’re interested in learning more about career opportunities at Meta, visit the Meta Careers page.








Amazon Research Awards recipients announced



Amazon Research Awards (ARA) provides unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines. This cycle, ARA received many excellent research proposals from across the world and today is publicly announcing 73 award recipients who represent 46 universities in 10 countries.

This announcement includes awards funded under five calls for proposals during the fall 2024 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography, and Sustainability. Proposals were reviewed for the quality of their scientific content and their potential to impact both the research community and society. Additionally, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open-source licenses.

Recipients have access to more than 700 Amazon public datasets and can utilize AWS AI/ML services and tools through their AWS Promotional Credits. Recipients also are assigned an Amazon research contact who offers consultation and advice, along with opportunities to participate in Amazon events and training sessions.


“Automated Reasoning is an important area of research for Amazon, with potential applications across various features and applications to help improve security, reliability, and performance for our customers. Through the ARA program, we collaborate with leading academic researchers to explore challenges in this field,” said Robert Jones, senior principal scientist with the Cloud Automated Reasoning Group. “We were again impressed by the exceptional response to our Automated Reasoning call for proposals this year, receiving numerous high-quality submissions. Congratulations to the recipients! We’re excited to support their work and partner with them as they develop new science and technology in this important area.”


“At Amazon, we believe that solving the world’s toughest sustainability challenges benefits from both breakthrough scientific research and open and bold collaboration. Through programs like the Amazon Research Awards program, we aim to support academic research that could contribute to our understanding of these complex issues,” said Kommy Weldemariam, Director of Science and Innovation Sustainability. “The selected proposals represent innovative projects that we hope will help advance knowledge in this field, potentially benefiting customers, communities, and the environment.”

ARA funds proposals throughout the year in a variety of research areas. Applicants are encouraged to visit the ARA call for proposals page for more information or send an email to be notified of future open calls.

The tables below list the fall 2024 cycle call-for-proposal recipients, grouped by research area and ordered alphabetically by last name.

AI for Information Security

Recipient | University | Research title
Christopher Amato | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms
Bernd Bischl | Ludwig Maximilian University of Munich | Improving Generative and Foundation Models Reliability via Uncertainty-awareness
Shiqing Ma | University of Massachusetts Amherst | LLM and Domain Adaptation for Attack Detection
Alina Oprea | Northeastern University | Multi-Agent Reinforcement Learning Cyber Defense for Securing Cloud Computing Platforms
Roberto Perdisci | University of Georgia | ContextADBench: A Comprehensive Benchmark Suite for Contextual Anomaly Detection

Automated Reasoning

Recipient | University | Research title
Nada Amin | Harvard University | LLM-Augmented Semi-Automated Proofs for Interactive Verification
Suguman Bansal | Georgia Institute of Technology | Certified Inductive Generalization in Reinforcement Learning
Ioana Boureanu | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems
Omar Haider Chowdhury | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege
Stefan Ciobaca | Alexandru Ioan Cuza University | An Interactive Proof Mode for Dafny
João Ferreira | INESC-ID | Polyglot Automated Program Repair for Infrastructure as Code
Sicun Gao | University of California, San Diego | Monte Carlo Trees with Conflict Models for Proof Search
Mirco Giacobbe | University of Birmingham | Neural Software Verification
Tobias Grosser | University of Cambridge | Synthesis-based Symbolic BitVector Simplification for Lean
Ronghui Gu | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software
Alexey Ignatiev | Monash University | Huub: Next-Gen Lazy Clause Generation
Kenneth McMillan | University of Texas at Austin | Synthesis of Auxiliary Variables and Invariants for Distributed Protocol Verification
Alexandra Mendes | University of Porto | Overcoming Barriers to the Adoption of Verification-Aware Languages
Jason Nieh | Columbia University | Scaling Formal Verification of Security Properties for Unmodified System Software
Rohan Padhye | Carnegie Mellon University | Automated Synthesis and Evaluation of Property-Based Tests
Nadia Polikarpova | University of California, San Diego | Discovering and Proving Critical System Properties with LLMs
Fortunat Rajaona | University of Surrey | Phoebe+: An Automated-Reasoning Tool for Provable Privacy in Cryptographic Systems
Subhajit Roy | Indian Institute of Technology Kanpur | Theorem Proving Modulo LLM
Gagandeep Singh | University of Illinois at Urbana–Champaign | Trustworthy LLM Systems using Formal Contracts
Scott Stoller | Stony Brook University | Restricter: An Automatic Tool for Authoring Amazon Cedar Access Control Policies with the Principle of Least Privilege
Peter Stuckey | Monash University | Huub: Next-Gen Lazy Clause Generation
Yulei Sui | University of New South Wales | Path-Sensitive Typestate Analysis through Sparse Abstract Execution
Nikos Vasilakis | Brown University | Semantics-Driven Static Analysis for the Unix/Linux Shell
Ping Wang | Stevens Institute of Technology | Leveraging Large Language Models for Reasoning Augmented Searching on Domain-specific NoSQL Database
John Wawrzynek | University of California, Berkeley | GPU-Accelerated High-Throughput SAT Sampling

AWS AI

Recipient | University | Research title
Panagiotis Adamopoulos | Emory University | Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations
Vikram Adve | University of Illinois at Urbana–Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models
Frances Arnold | California Institute of Technology | Closed-loop Generative Machine Learning for De Novo Enzyme Discovery and Optimization
Yonatan Bisk | Carnegie Mellon University | Useful, Safe, and Robust Multiturn Interactions with LLMs
Shiyu Chang | University of California, Santa Barbara | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing
Yuxin Chen | University of Pennsylvania | Provable Acceleration of Diffusion Models for Modern Generative AI
Tianlong Chen | University of North Carolina at Chapel Hill | Cut the Crap: Advancing the Efficient Communication of Multi-Agent Systems via Spatial-Temporal Topology Design and KV Cache Sharing
Mingyu Ding | University of North Carolina at Chapel Hill | Aligning Long Videos and Language as Long-Horizon World Models
Nikhil Garg | Cornell University | Market Design for Responsible Multi-agent LLMs
Jessica Hullman | Northwestern University | Human-Aligned Uncertainty Quantification in High Dimensions
Christopher Jermaine | Rice University | Fast, Trusted AI Using the EINSUMMABLE Compiler
Yunzhu Li | Columbia University | Physics-Informed Foundation Models Through Embodied Interactions
Pattie Maes | Massachusetts Institute of Technology | Understanding How LLM Agents Deviate from Human Choices
Sasa Misailovic | University of Illinois at Urbana–Champaign | Fellini: Differentiable ML Compiler for Full-Graph Optimization for LLM Models
Kristina Monakhova | Cornell University | Trustworthy extreme imaging for science using interpretable uncertainty quantification
Todd Mowry | Carnegie Mellon University | Efficient LLM Serving on Trainium via Kernel Generation
Min-hwan Oh | Seoul National University | Mutually Beneficial Interplay Between Selection Fairness and Context Diversity in Contextual Bandits
Patrick Rebeschini | University of Oxford | Optimal Regularization for LLM Alignment
Jose Renau | University of California, Santa Cruz | Verification Constrained Hardware Optimization using Intelligent Design Agentic Programming
Vilma Todri | Emory University | Generative AI solutions for The Spillover Effect of Fraudulent Reviews on Product Recommendations
Aravindan Vijayaraghavan | Northwestern University | Human-Aligned Uncertainty Quantification in High Dimensions
Wei Yang | University of Texas at Dallas | Optimizing RISC-V Compilers with RISC-LLM and Syntax Parsing
Huaxiu Yao | University of North Carolina at Chapel Hill | Aligning Long Videos and Language as Long-Horizon World Models
Amy Zhang | University of Washington | Tools for Governing AI Agent Autonomy
Ruqi Zhang | Purdue University | Efficient Test-time Alignment for Large Language Models and Large Multimodal Models
Zheng Zhang | Rutgers University-New Brunswick | AlphaQC: An AI-powered Quantum Circuit Optimizer and Denoiser

AWS Cryptography

Recipient | University | Research title
Alexandra Boldyreva | Georgia Institute of Technology | Quantifying Information Leakage in Searchable Encryption Protocols
Maria Eichlseder | Graz University of Technology, Austria | SALAD – Systematic Analysis of Lightweight Ascon-based Designs
Venkatesan Guruswami | University of California, Berkeley | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing
Joseph Jaeger | Georgia Institute of Technology | Analyzing Chat Encryption for Group Messaging
Aayush Jain | Carnegie Mellon University | Large Scale Multiparty Silent Preprocessing for MPC from LPN
Huijia Lin | University of Washington | Large Scale Multiparty Silent Preprocessing for MPC from LPN
Hamed Nemati | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library
Karl Palmskog | KTH Royal Institute of Technology | Trustworthy Automatic Verification of Side-Channel Countermeasures for Binary Cryptographic Programs using the HoIBA library
Chris Peikert | University of Michigan, Ann Arbor | Practical Third-Generation FHE and Bootstrapping
Dimitrios Skarlatos | Carnegie Mellon University | Scale-Out FHE LLMs on GPUs
Vinod Vaikuntanathan | Massachusetts Institute of Technology | Can Quantum Computers (Really) Factor?
Daniel Wichs | Northeastern University | Obfuscation, Proof Systems, and Secure Computation: A Research Program on Cryptography at the Simons Institute for the Theory of Computing
David Wu | University of Texas at Austin | Fast Private Information Retrieval and More using Homomorphic Encryption

Sustainability

Recipient | University | Research title
Meeyoung Cha | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring
Jingrui He | University of Illinois at Urbana–Champaign | Foundation Model Enabled Earth’s Ecosystem Monitoring
Pedro Lopes | University of Chicago | AI-powered Tools that Enable Engineers to Make & Re-make Sustainable Hardware
Cheng Yaw Low | Max Planck Institute | Forest-Blossom (Flossom): A New Framework for Sustaining Forest Biodiversity Through Outcome-Driven Remote Sensing Monitoring








Independent evaluations demonstrate Nova Premier’s safety



AI safety is a priority at Amazon. Our investment in safe, transparent, and responsible AI (RAI) includes collaboration with the global community and policymakers. We are members of and collaborate with organizations such as the Frontier Model Forum, the Partnership on AI, and other forums organized by government agencies such as the National Institute of Standards and Technology (NIST). Consistent with Amazon’s endorsement of the Korea Frontier AI Safety Commitments, we published our Frontier Model Safety Framework earlier this year.

Amazon Nova Premier’s guardrails help prevent generation of unsafe content.

During the development of the Nova Premier model, we conducted a comprehensive evaluation to assess its performance and safety. This included testing on both internal and public benchmarks, as well as internal automated and third-party red-teaming exercises. Once the final model was ready, we prioritized obtaining unbiased, third-party evaluations of the robustness of the model’s RAI controls. In this post, we outline the key findings from these evaluations, demonstrating the strength of our testing approach and Nova Premier’s standing as a safe model. Specifically, we cover our evaluations with two third-party evaluators: PRISM AI and ActiveFence.

Evaluation of Nova Premier against PRISM AI

PRISM Eval’s Behavior Elicitation Tool (BET) dynamically and systematically stress-tests AI models’ safety guardrails. The methodology focuses on measuring how many adversarial attempts (steps) it takes to get a model to generate harmful content across several key risk dimensions. The central metric is “steps to elicit” — the number of increasingly sophisticated prompting attempts required before a model generates an inappropriate response. A higher number of steps indicates stronger safety measures, as the model is more resistant to manipulation. The PRISM risk dimensions (inspired by the MLCommons AI Safety Benchmarks) include CBRNE weapons, violent crimes, non-violent crimes, defamation, and hate, amongst several others.
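
For readers new to the metric, the toy calculation below shows how per-category step counts roll up into the averages reported next; the numbers are invented, and the code is not PRISM’s tooling.

```python
# Hypothetical per-category "steps to elicit" counts for one model.
elicitation_steps = {
    "violent_crimes": 41,
    "non_violent_crimes": 38,
    "defamation": 55,
    "hate": 50,
}

# A higher average means more adversarial attempts were needed, i.e.,
# guardrails that are harder to circumvent by iterative prompting.
average = sum(elicitation_steps.values()) / len(elicitation_steps)
print(f"average steps to elicit: {average:.1f}")

# The category with the fewest steps is the weakest risk dimension.
weakest = min(elicitation_steps, key=elicitation_steps.get)
print(f"weakest dimension: {weakest} ({elicitation_steps[weakest]} steps)")
```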


Using the BET Eval tool and its V1.0 metric, which is tailored toward non-reasoning models, we compared the recently released Nova models (Pro and Premier) to the latest models in the same class: Claude (3.5 v2 and 3.7 non-reasoning) and Llama 4 Maverick, all available through Amazon Bedrock. PRISM BET conducts black-box evaluations (where model developers don’t have access to the test prompts) of models integrated with their API. The evaluation conducted with BET Eval MAX, PRISM’s most comprehensive and aggressive testing suite, revealed significant variations in safety against malicious instructions. Nova models demonstrated superior overall safety performance, with an average of 43 steps for Premier and 52 steps for Pro, compared to 37.7 for Claude 3.5 v2 and fewer than 12 steps for the other models in the comparison set (namely, 9.9 for Claude 3.7, 11.5 for Claude 3.7 thinking, and 6.5 for Llama 4 Maverick). This higher step count suggests that, on average, Nova’s safety guardrails are more sophisticated and harder to circumvent through adversarial prompting. The figure below presents the number of steps per harm category evaluated through BET Eval MAX.

Results of tests using PRISM’s BET Eval MAX testing suite.

The PRISM evaluation provides valuable insights into the relative safety of different Amazon Bedrock models. Nova’s strong performance, particularly in hate speech and defamation resistance, represents meaningful progress in AI safety. However, the results also highlight the ongoing challenge of building truly robust safety measures into AI systems. As the field continues to evolve, frameworks like BET will play an increasingly important role in benchmarking and improving AI safety. As a part of this collaboration, Nicolas Miailhe, CEO of PRISM Eval, said, “It’s incredibly rewarding for us to see Nova outperforming strong baselines using the BET Eval MAX; our aim is to build a long-term partnership toward safer-by-design models and to make BET available to various model providers.” Organizations deploying AI systems should carefully consider these safety metrics when selecting models for their applications.

Manual red teaming with ActiveFence

The AI safety & security company ActiveFence benchmarked Nova Premier on Bedrock on prompts distributed across Amazon’s eight core RAI categories. ActiveFence also evaluated Claude 3.7 (non-reasoning mode) and GPT 4.1 API on the same set. The flag rate on Nova Premier was lower than that on the other two models, indicating that Nova Premier is the safest of the three.

Model | 3P flag rate [↓ is better]
Nova Premier | 12.0%
Sonnet 3.7 (non-reasoning) | 20.6%
GPT-4.1 API | 22.4%
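
For reference, a flag rate like the one above is simply the fraction of test prompts whose responses the third-party evaluator flags as violating policy; the sketch below uses hypothetical verdicts, not ActiveFence data.

```python
# Hypothetical per-prompt verdicts from a third-party red team.
verdicts = ["safe", "flagged", "safe", "safe", "flagged"]

flag_rate = verdicts.count("flagged") / len(verdicts)
print(f"3P flag rate: {flag_rate:.1%}")  # lower is better; 40.0% here
```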


“Our role is to think like an adversary but act in service of safety,” said Guy Paltieli from ActiveFence. “By conducting a blind stress test of Nova Premier under realistic threat scenarios, we helped evaluate its security posture in support of Amazon’s broader responsible-AI goals, ensuring the model could be deployed with greater confidence.”

These evaluations conducted with PRISM and ActiveFence give us confidence in the strength of our guardrails and our ability to protect our customers’ safety when they use our models. While these evaluations demonstrate strong safety performance, we recognize that AI safety is an ongoing challenge requiring continuous improvement. These assessments represent a point-in-time snapshot, and we remain committed to regular testing and enhancement of our safety measures. No AI system can guarantee perfect safety in all scenarios, which is why we maintain monitoring and response systems after deployment.

Acknowledgments: Vincent Ponzo, Elyssa Vincent






