
Accelerating on-device ML on Meta’s family of apps with ExecuTorch

  • ExecuTorch is the PyTorch inference framework for edge devices developed by Meta with support from industry leaders like Arm, Apple, and Qualcomm. 
  • Running machine learning (ML) models on-device is increasingly important for Meta’s family of apps (FoA). These on-device models improve latency, maintain user privacy by keeping data on users’ devices, and enable offline functionality.
  • We’re showcasing some of the on-device AI features, powered by ExecuTorch, that are serving billions of people on Instagram, WhatsApp, Messenger, and Facebook.
  • These rollouts have significantly improved the performance and efficiency of on-device ML models in Meta’s FoA and eased the research-to-production path.

Over the past year, we’ve rolled out ExecuTorch, an open-source solution for on-device inference on mobile and edge devices, across our family of apps (FoA) and seen significant improvements in model performance, privacy enhancement, and latency over our previous on-device machine learning (ML) stack.

ExecuTorch was built in collaboration with industry leaders and uses PyTorch 2.x technologies to convert models into a stable and compact representation for efficient on-device deployment. Its compact runtime, modularity, and extensibility make it easy for developers to choose and customize components – ensuring portability across platforms, compatibility with PyTorch, and high performance.
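
To make that path concrete, here is a minimal sketch of the export flow using the public PyTorch 2.x and ExecuTorch APIs. The toy model and file name are placeholders, and exact API details may differ between ExecuTorch releases.

```python
import torch
from executorch.exir import to_edge  # ExecuTorch export API; details may vary by release


class TinyClassifier(torch.nn.Module):
    """Placeholder model standing in for a production on-device model."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4)
        )

    def forward(self, x):
        return self.net(x)


model = TinyClassifier().eval()
example_inputs = (torch.randn(1, 16),)

# 1. Capture the model as a stable graph with PyTorch 2.x export.
exported_program = torch.export.export(model, example_inputs)

# 2. Lower it to ExecuTorch's edge dialect and serialize a compact program.
executorch_program = to_edge(exported_program).to_executorch()

# 3. Write the .pte file that the on-device ExecuTorch runtime loads.
with open("tiny_classifier.pte", "wb") as f:
    f.write(executorch_program.buffer)
```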

Adopting ExecuTorch has helped us enhance our user experiences in our products and services used by billions of people all over the world.

The following are just a few examples of the ML models that ExecuTorch supports in our apps on Android and iOS devices.

Enabling Cutouts on Instagram

Cutouts is one of Instagram’s latest features for creative expression and storytelling. It lets people transform photos and videos of their favorite moments into animated, personalized stickers that they can share via Reels or Stories. We migrated the Cutouts feature in Instagram to run with ExecuTorch by enabling SqueezeSAM, a lightweight version of the Meta Segment Anything Model (SAM). For both Android and iOS, ExecuTorch was significantly faster compared to the older stack, translating into increases in Cutouts’ daily active users (DAU). 

ExecuTorch enables Instagram’s Cutouts feature to run faster and more efficiently for both on-device sticker generation (left) and creating overlays on a photo (right).

Improving video and call quality on WhatsApp

WhatsApp needs to be usable and reliable regardless of your network connection bandwidth. To achieve this, we developed bandwidth estimation models, tailored for various platforms. These models help detect and utilize available network bandwidth, optimizing video streaming quality without compromising the smoothness of video calls.  

These models need to be highly accurate and run as efficiently as possible. By leveraging ExecuTorch, we have observed improvements for the bandwidth estimation models in performance, reliability, and efficiency metrics. Specifically, we reduced the model load time and average inference time substantially while reducing app not responding (ANR) metrics. Along the way, we further strengthened security guarantees compared to the older PyTorch mobile framework by adding fuzzing tests, which involve supplying invalid or random inputs to a program and monitoring for exceptions. With the positive signal from these releases, we are now migrating several other key WhatsApp models, such as ones for on-device noise-canceling and video enhancement, to ExecuTorch as well.
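
Fuzzing in this setting can be as simple as feeding random byte buffers to whatever parses a serialized model and confirming that malformed input is rejected cleanly. The sketch below is a generic illustration, with `load_model` as a hypothetical stand-in for the real loader rather than WhatsApp's actual test harness.

```python
import os
import random
import traceback


def fuzz_model_loader(load_model, trials=1000, max_len=4096,
                      expected=(ValueError, RuntimeError)):
    """Supply invalid/random inputs to a model loader and monitor for failures.

    `load_model` is a hypothetical callable that deserializes a model from bytes.
    Cleanly rejecting garbage (raising one of `expected`) is the desired behavior;
    anything else is recorded as a finding worth investigating.
    """
    findings = []
    for i in range(trials):
        payload = os.urandom(random.randint(1, max_len))  # random, almost surely invalid bytes
        try:
            load_model(payload)
        except expected:
            continue  # graceful rejection is exactly what we want
        except Exception:
            findings.append((i, traceback.format_exc()))  # unexpected failure path
    return findings
```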

Here, Messenger’s language identification model (LID) restricts the prompt language to English for Meta AI’s Imagine feature.

Shipping on-device ML for end-to-end encryption on Messenger

End-to-end encryption (E2EE) on Messenger ensures that no one except you and the people you’re talking to can see your messages, not even Meta. ExecuTorch has enabled E2EE on Messenger by moving server-side models to run on-device, allowing data transfers to remain encrypted.

To enable E2EE, we migrated and deployed several models, including an on-device language identification (LID) model on Messenger. LID detects the language of a given text and enables various downstream tasks, including translation, message summarization, and personalized content recommendations. With ExecuTorch, on-device LID is significantly faster and conserves server and network capacity.
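
To give a sense of the scale of such a model, here is a deliberately tiny, hypothetical character-level language-identification classifier in PyTorch. The architecture, vocabulary, and label count are illustrative assumptions rather than Messenger's actual LID model, but a network of roughly this shape is small enough to export with ExecuTorch and run on-device.

```python
import torch
import torch.nn as nn


class TinyLID(nn.Module):
    """Toy character-level language-identification classifier (illustrative only)."""

    def __init__(self, vocab_size=256, embed_dim=32, num_languages=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_languages)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, sequence_length) of UTF-8 byte values in [0, 255]
        pooled = self.embed(char_ids).mean(dim=1)  # average over characters
        return self.classifier(pooled)             # unnormalized per-language scores


# Example: score a short string against the (hypothetical) label set.
text = "what time is the call?"
char_ids = torch.tensor([list(text.encode("utf-8"))])
logits = TinyLID()(char_ids)
predicted_language = logits.argmax(dim=-1)
```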

To preserve Messenger’s E2EE environment, we have also leveraged ExecuTorch to move other Messenger models on-device, including one for optimizing video calling quality (similar to WhatsApp’s bandwidth estimation models) and another for image cutouts (similar to Cutouts on Instagram). These shifts resulted in improved infrastructure efficiency by freeing up capacity and enabling us to scale these features globally. 

Background music recommendations for Facebook

Facebook employs a core AI model called SceneX that performs a variety of tasks, including image recognition/categorization, captioning, creating AI-generated backgrounds for images, and image safety checks. Shifting SceneX to ExecuTorch now allows it to enhance people’s Facebook Stories by suggesting background music based on images.

With the ExecuTorch rollout, we saw performance improvements in SceneX across the board, from low- to high-end devices, compared to the older stack. Several other models, including ones that enhance image quality and reduce background noise during calls, are now in various stages of A/B testing.

Building the future of on-device AI with the ExecuTorch Community

We hope the results we’ve seen leveraging ExecuTorch to help solve some of Meta’s on-device ML challenges at scale will be encouraging to the rest of the industry. We invite you to contribute to ExecuTorch and share feedback on our GitHub page. You can also join our growing community on the ExecuTorch Discord server.

We look forward to driving more innovation in on-device ML and shaping the future of on-device AI together with the community.






A better path to pruning large language models

In recent years, large language models (LLMs) have revolutionized the field of natural-language processing and made significant contributions to computer vision, speech recognition, and language translation. One of the keys to LLMs’ effectiveness has been the exceedingly large datasets they’re trained on. The trade-off is exceedingly large model sizes, which lead to slower runtimes and higher consumption of computational resources. AI researchers know these challenges well, and many of us are seeking ways to make large models more compact while maintaining their performance.

To this end, we’d like to present a novel philosophy, “Prune Gently, Taste Often”, which focuses on a new way to do pruning, a compression process that removes unimportant connections within the layers of an LLM’s neural network. In a paper we presented at this year’s meeting of the Association for Computational Linguistics (ACL), we describe our framework, Wanda++, which can compress a model with seven billion parameters in under 10 minutes on a single GPU.

Measured according to perplexity, or how well a probability distribution predicts a given sample, our approach improves the model’s performance by 32 percent over its leading predecessor, called Wanda.

A brief history of pruning

Pruning is challenging for a number of reasons. First, training huge LLMs is expensive, and once they’re trained, runtime is expensive too. While pruning can make runtime cheaper, if it’s done later in the build process, it hurts performance. But if it’s done too early in the build process, it further exacerbates the first problem: increasing the cost of training.

When a model is trained, it builds a map of semantic connections gleaned from the training data. These connections, called parameters, gain or lose importance, or weight, as more training data is introduced. Pruning during the training stage, called “pruning-aware training,” is baked into the training recipe and performs model-wide scans of weights at a high computational cost. What’s worse, pruning-aware training comes with a heavy trial burden of full-scale runs. Researchers must decide when to prune, how often, and what criteria to use to keep pretraining performance viable. Tuning such “hyperparameters” requires repeated model-wide culling experiments, further driving up costs.

The other approach to pruning is to do it after the LLM is trained. This tends to be cheaper, taking somewhere between a few minutes and a few hours — compared to the weeks that training can take. And post-training pruning doesn’t require a large number of GPUs.

In this approach, engineers scan the model layer by layer for unimportant weights, as measured by a combination of factors such as how big the weight is and how frequently it factors into the model’s final output. If either number is low, the weight is more likely to be pruned. The problem with this approach is that it isn’t “gentle”: it shocks the structure of the model, which loses accuracy since it doesn’t learn anything from the absence of those weights, as it would have if they had been removed during training.
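
As a rough illustration of that kind of score, the sketch below combines weight magnitude with an input-activation norm, in the spirit of the Wanda criterion (|w| times the L2 norm of the corresponding input feature over a small calibration batch). The function name and the choice to prune within each output row are illustrative assumptions.

```python
import torch


def magnitude_activation_prune(weight: torch.Tensor,
                               calib_inputs: torch.Tensor,
                               sparsity: float = 0.5) -> torch.Tensor:
    """Zero out low-importance weights of a single linear layer.

    weight:       (out_features, in_features) weight matrix.
    calib_inputs: (num_samples, in_features) activations from a small calibration set.
    """
    # L2 norm of each input feature over the calibration samples.
    act_norm = torch.linalg.vector_norm(calib_inputs, dim=0)  # (in_features,)
    scores = weight.abs() * act_norm                          # per-weight importance

    pruned = weight.clone()
    k = int(weight.shape[1] * sparsity)                       # weights to drop per output row
    if k > 0:
        drop_idx = scores.topk(k, dim=1, largest=False).indices
        pruned.scatter_(1, drop_idx, 0.0)                     # zero the lowest-scoring weights
    return pruned
```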

Striking a balance

Here’s where our philosophy presents a third path. After a model is fully trained, we scan it piece by piece, analyzing weights neither at the whole-model level nor at the layer level but at the level of decoding blocks: smaller, repeating building blocks that make up most of an LLM.

Within each decoding block, we feed in a small amount of data and collect the output to calibrate the weights, pruning the unimportant ones and updating the surviving ones for a few iterations. Since decoder blocks are small — a fraction of the size of the entire model — this approach requires only a single GPU, which can scan a block within minutes.
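
A schematic of that block-by-block loop might look like the following. Here `blocks`, `calib_batches`, and `prune_fn` are hypothetical stand-ins (for example, a magnitude-times-activation score like the one sketched earlier), and the real Wanda++ regional optimization differs in its details; this only illustrates the prune-then-recover rhythm.

```python
import torch


def prune_block_by_block(blocks, calib_batches, prune_fn, steps=5, lr=1e-4):
    """Prune and briefly recover one decoder block at a time (illustrative sketch)."""
    for block in blocks:
        # 1. Record what the dense block produces on the small calibration set.
        with torch.no_grad():
            targets = [block(x) for x in calib_batches]

        # 2. Prune the linear layers inside this block and remember their masks.
        masks = {}
        for module in block.modules():
            if isinstance(module, torch.nn.Linear):
                module.weight.data = prune_fn(module.weight.data)
                masks[module] = (module.weight.data != 0).float()

        # 3. A few lightweight updates so the sparse block matches its dense outputs.
        opt = torch.optim.Adam(block.parameters(), lr=lr)
        for _ in range(steps):
            for x, y in zip(calib_batches, targets):
                loss = torch.nn.functional.mse_loss(block(x), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
                with torch.no_grad():  # keep pruned entries at exactly zero
                    for module, mask in masks.items():
                        module.weight.mul_(mask)
```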

We liken our approach to the way an expert chef spices a complex dish. In cooking, spices are easy to overlook and hard to add at the right moment — and even risky, if handled poorly. One simply cannot add a heap of tarragon, pepper, and salt at the beginning (pruning-aware training) or at the end (layer-wide pruning) and expect to have the same results as if spices had been added carefully throughout. Similarly, our approach finds a balance between two extremes. Pruning block by block, as we do, is more like spicing a dish throughout the process. Hence the motto of our approach: Prune Gently, Taste Often.

From a technical perspective, the key is focusing on decoder blocks, which are composed of a few neural-network layers, such as multihead-attention layers and multilayer perceptrons. Even an LLM with seven billion parameters might have just 32 decoder blocks. Each block is small enough — say, 200 million parameters — to easily be scanned by a single GPU. Pruning a model at the block level saves resources by not consuming much GPU memory.
And while all pruning processes initially diminish performance, ours actually brings it back. Every time we scan a block, we balance pruning with performance until they’re optimized. Then we move on to the next block. This preserves both performance at the block level and overall model quality. With Wanda++, we’re offering a practical, scalable middle path for the LLM optimization process, especially for teams that don’t control the full training pipeline or budget.

Pruning at the level of the decoder block is “gentle” because the effects of the pruning are localized; they exert less influence on the overall behavior of the model. Repeating the pruning process for each block is like the practice of a chef who “tastes often” to ensure that the spices in the meal under preparation remain in balance.

What’s more, we believe our philosophy also helps address a pain point of LLM development at large companies. Before the era of LLMs, each team built its own models, with the services that a single LLM now provides achieved via orchestration of those models. Since none of the models was huge, each model development team received its own allocation of GPUs. Nowadays, however, computational resources tend to get soaked up by the teams actually training LLMs. With our philosophy, teams working on runtime performance optimization, for instance, could reclaim more GPUs, effectively expanding what they can explore.

Further implementations of Prune Gently, Taste Often could apply to other architectural optimizations. For instance, calibrating a model at the decoder-block level could convert a neural network with a dense structure, called a dense multilayer perceptron, to a less computationally intensive neural network known as a mixture of experts (MoE). In essence, per-decoder-block calibration can enable a surgical redesign of the model by replacing generic components with more efficient and better-performing alternatives such as Kolmogorov-Arnold Networks (KAN). While the Wanda++ philosophy isn’t a cure-all, we believe it opens up an exciting new path for re-thinking model compression and exploring future LLM architectures.






Three challenges in machine-based reasoning

Generative AI has made the past few years the most exhilarating time in my 30+-year career in the space of mechanized reasoning. Why? Because the computer industry and even the general public are now eager to talk about ideas that those of us working in logic have been passionate about for years. The challenges of language, syntax, semantics, validity, soundness, completeness, computational complexity, and even undecidability were previously too academic and obscure to be relevant to the masses. But all of that has changed. To those of you who are now discovering these topics: welcome! Step right in, we’re eager to work with you.

I thought it would be useful to share what I believe are the three most vexing aspects of making correct reasoning work in AI systems, e.g., generative-AI-based systems such as chatbots. The upcoming launch of the Automated-Reasoning-checks capability in Bedrock Guardrails was in fact motivated by these challenges. But we are far from done: due to the inherent difficulty of these problems, we as a community (and we on the Automated-Reasoning-checks team) will be working on these challenges for years to come.

Difficulty #1: Translating from natural to structured language

Humans usually communicate with imprecise and ambiguous language. Often, we are able to infer disambiguating detail from context. In some cases, when it really matters, we will try to clarify with each other (“did you mean to say… ?”). In other cases, even when we really should, we won’t.

This is often a source of confusion and conflict. Imagine that an employer defines eligibility for an employee HR benefit as “having a contract of employment of 0.2 full-time equivalent (FTE) or greater”. Suppose I tell you that I “spend 20% of my time at work, except when I took time off last year to help a family member recover from surgery”. Am I eligible for the benefit? When I said I “spend 20% of my time at work”, does that mean I am spending 20% of my working time, under the terms of a contract?

My statement has multiple reasonable interpretations, each with different outcomes for benefit eligibility. Something we do in Automated Reasoning checks is make multiple attempts to translate between the natural language and query predicates, using complementary approaches. This is a common interview technique: ask for the same information in different ways, and see if the facts stay consistent. In Automated Reasoning checks, we use solvers for formal logic systems to prove/disprove the equivalence of the different interpretations. If the translations differ at the semantic level, the application that uses Automated Reasoning checks can then ask for clarifications (e.g. “Can you confirm that you have a contract of employment for 20% of full time or greater?”).
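
As a concrete (and simplified) illustration of that cross-check, the sketch below uses the Z3 solver's Python bindings to test whether two translations of the employee's statement force the same eligibility outcome. The variable names and the toy policy encoding are assumptions for illustration, not the product's internal representation.

```python
from z3 import Bool, Real, Solver, Not, sat  # pip install z3-solver

contract_fte = Real("contract_fte")   # fraction of full time in the employment contract
time_at_work = Real("time_at_work")   # fraction of time actually spent at work
eligible = Bool("eligible")

s = Solver()
# Policy: eligible if and only if the contract is 0.2 FTE or greater.
s.add(eligible == (contract_fte >= 0.2))

# Interpretation A reads "20% of my time" as the contracted FTE, which would
# force eligibility. Interpretation B reads it as time actually spent at work,
# which says nothing about the contract:
s.add(time_at_work == 0.2)

# Is there a scenario consistent with interpretation B in which the person is
# NOT eligible? If so, the two translations are not semantically equivalent,
# and the application should ask a clarifying question.
s.add(Not(eligible))
print("interpretations diverge" if s.check() == sat else "interpretations agree")
```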

Automated Reasoning checks use large language models to generate several possible translations of natural language into a formal language. Automated Reasoning checks flag discrepancies between the translations, which customers can resolve through natural-language interactions.

Difficulty #2: Defining truth

Something that never fails to amaze me is how difficult it is for groups of people to agree on the meanings of rules. Complex rules and laws often have subtle contradictions that can go unnoticed until someone tries to reach consensus on their interpretation. The United Kingdom’s Copyright, Designs and Patents Act 1988, for example, contains an inherent contradiction: it defines copyrightable works as those stemming from an author’s original intellectual creation, while simultaneously offering protection to works that require no creative human input — an incoherence that is particularly glaring in this age of AI-generated works.

The second source of trouble is that we seem to always be changing our rules. The US federal government’s per-diem rates, for example, change annually, requiring constant maintenance of any system that depends on those values.

Finally, few people actually deeply understand all of the corner cases of the rules that they are supposed to abide by. Consider the question of wearing earphones while driving: In some US states (e.g., Alaska) it’s illegal; in some states (e.g., Florida) it’s legal to wear one earphone only; while in other states (e.g., Texas), it’s actually legal. In an informal poll, very few of my friends and colleagues were confident in their understanding of the legality of wearing headphones while driving in the place where they most recently drove a car.

Automated Reasoning checks address these challenges by helping customers define what the truth should be in their domains of interest — be they tax codes, HR policies, or other rule systems — and by providing mechanisms for refining those definitions over time, as the rules change. As generative-AI-based (GenAI-based) chatbots emerged, something that captured the imagination of many of us is the idea that complex rule systems could be made accessible to the general public through natural-language queries. Chatbots could in the future give direct and easy-to-understand answers to questions like “Can I make a U-turn when driving in Tokyo, Japan?”, and by addressing the challenge of defining truth, Automated Reasoning checks can help ensure that the answer is reliable.

The user interface for Automated Reasoning checks.

Difficulty #3: Definitive reasoning

Imagine we have a set of rules (let’s call it R) and a statement (S) we want to verify. For example, R might be Singapore’s driving code, and S might be a question about U-turns at intersections in Singapore. We can encode R and S into Boolean logic, which computers understand, by combining Boolean variables in various ways.

Let’s say that encoding R and S needs just 500 bits — about 63 characters. This is a tiny amount of information! But even when our encoding of the rule system is small enough to fit in a text message, the number of scenarios we’d need to check is astronomical. In principle, we must consider all 2^500 possible combinations before we can authoritatively declare S to be a true statement. A powerful computer today can perform hundreds of millions of operations in the time it takes you to blink. But even if we had all the computers in the world running at this blazing speed since the beginning of time, we still wouldn’t be close to checking all 2^500 possibilities today.

Thankfully, the automated-reasoning community has developed a class of sophisticated tools, called SAT solvers, that make this type of combinatorial checking possible and remarkably fast in many (but not all) cases. Automated Reasoning checks make use of these tools when checking the validity of statements.
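
To make this concrete, here is a toy example, again using the Z3 solver's Python bindings, that checks whether a statement S follows from a rule R by asking whether R together with the negation of S is unsatisfiable, rather than enumerating assignments. The rule is invented for illustration and is not Singapore's actual driving code.

```python
from z3 import Bools, Solver, Implies, And, Not, unsat

u_turn_sign, at_intersection, u_turn_allowed = Bools(
    "u_turn_sign at_intersection u_turn_allowed"
)

# R: a U-turn at an intersection is allowed only where a U-turn sign is posted.
R = Implies(And(at_intersection, u_turn_allowed), u_turn_sign)
# S: "if there is no U-turn sign at this intersection, I may not make a U-turn."
S = Implies(And(at_intersection, Not(u_turn_sign)), Not(u_turn_allowed))

# S is entailed by R exactly when R AND (not S) has no satisfying assignment.
s = Solver()
s.add(R, Not(S))
print("S follows from R" if s.check() == unsat else "S does not follow")
```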

Unfortunately, not all problems can be encoded in a way that plays to the strengths of SAT solvers. For example, imagine a rule system has the provision “if every even number greater than 2 is the sum of two prime numbers, then the tax withholding rate is 30%; otherwise it’s 40%”. The problem is that to know the tax withholding rate, you need to know whether every even number greater than 2 is the sum of two prime numbers, and no one currently knows whether this is true. This statement is called Goldbach’s conjecture and has been an open problem since 1742. Still, while we don’t know the answer to Goldbach’s conjecture, we do know that it is either true or false, so we can definitively say that the tax withholding rate must be either 30% or 40%.
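
A solver can make that last argument mechanically: leaving the Goldbach premise as an unconstrained Boolean, it still proves that no other withholding rate is possible. The encoding below is a toy illustration.

```python
from z3 import Bool, Int, Solver, If, Or, Not, unsat

goldbach_holds = Bool("goldbach_holds")     # truth value unknown to anyone
withholding_pct = Int("withholding_pct")

s = Solver()
s.add(withholding_pct == If(goldbach_holds, 30, 40))  # the rule as stated
# Look for any scenario in which the rate is neither 30% nor 40%.
s.add(Not(Or(withholding_pct == 30, withholding_pct == 40)))
assert s.check() == unsat  # no such scenario: the rate is provably 30 or 40
```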

It’s also fun to think about whether it’s possible for a customer of Automated Reasoning checks to define a policy that is contingent on the output of Automated Reasoning checks. For instance, could the policy encode the rule “access is allowed if and only if Automated Reasoning checks say it is not allowed”? Here, no correct answer is possible, because the rule has created a contradiction by referring recursively to its own checking procedure. The best we can possibly do is answer “Unknown” (which is, in fact, what Automated Reasoning checks will answer in this instance).
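
Encoded naively, such a self-referential rule is simply a contradiction, which is why "Unknown" is the only honest answer. A two-line illustration in the same style:

```python
from z3 import Bool, Solver, Not, unsat

allowed = Bool("allowed")        # "the checks say access is allowed"
s = Solver()
s.add(allowed == Not(allowed))   # "allowed if and only if the checks say it is not"
assert s.check() == unsat        # no consistent assignment exists
```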

The fact that a tool such as Automated Reasoning checks can return neither “true” nor “false” to statements like this was first identified by Kurt Gödel in 1931. What we know from Gödel’s result is that systems like Automated Reasoning checks can’t be both consistent and complete, so they must choose one. We have chosen to be consistent.

These three difficulties — translating natural language into structured logic, defining truth in the context of ever-changing and sometimes contradictory rules, and tackling the complexity of definitive reasoning — are more than mere technical hurdles we face when we try to build AI systems with sound reasoning. They are problems that are deeply rooted in both the limitations of our technology and the intricacies of human systems.

With the forthcoming launch of Automated Reasoning checks in Bedrock Guardrails, we are tackling these challenges through a combination of complementary approaches: applying cross-checking methods to translate from ambiguous natural language to logical predicates, providing flexible frameworks to help customers develop and maintain rule systems, and employing sophisticated SAT solvers while carefully handling cases where definitive answers are not possible. As we work to improve the performance of the product on these challenges, we are not only advancing technology but also deepening our understanding of the fundamental questions that have shaped reasoning itself, from Gödel’s incompleteness theorem to the evolving nature of legal and policy frameworks.

Given our commitment to providing sound reasoning, the road ahead in the AI space is challenging. Challenge accepted!






Building a human-computer interface for everyone

What if you could control any device using only subtle hand movements?

New research from Meta’s Reality Labs is pointing even more firmly toward wrist-worn devices using surface electromyography (sEMG) becoming the future of human-computer interaction.

But how do you develop a wrist-worn input device that works for everyone?

Generalization has been one of the most significant challenges in the field of human-computer interaction (HCI). The machine learning models that power a device can be trained to respond to an individual’s hand gestures, but they struggle to apply that same learning to someone else. Essentially, novel HCI devices are usually one-size-fits-one.

On the latest episode of the Meta Tech Podcast, Pascal Hartig sits down with Sean B., Lauren G., and Jesse M. — research scientists on Meta’s EMG engineering and research team — to discuss how their team is tackling the challenge of generalization and reimagining how we interact with technology. 

They discuss the road to creating a first-of-its-kind, generic human-computer neuromotor interface, what happens when software and hardware engineering meet neuroscience, and more!

You can listen to the episode wherever you get your podcasts.

The Meta Tech Podcast is a podcast, brought to you by Meta, where we highlight the work Meta’s engineers are doing at every level – from low-level frameworks to end-user features.

Send us feedback on Instagram, Threads, or X.

And if you’re interested in learning more about career opportunities at Meta, visit the Meta Careers page.




