How Meta keeps its AI hardware reliable

  • Hardware faults can have a significant impact on AI training and inference.
  • Silent data corruptions (SDCs), undetected data errors caused by hardware, can be particularly harmful for AI systems that rely on accurate data for training as well as providing useful outputs.
  • We are sharing methodologies we deploy at various scales for detecting SDC across our AI and non-AI infrastructure to help ensure the reliability of AI training and inference workloads across Meta.

Meta’s global AI infrastructure consists of a large number of hardware components and servers, connected via network fabric across globally distributed data centers. This setup integrates storage, compute, and network architectures with unique file systems and PyTorch applications tailored for training or inference workloads. This infrastructure supports training large-scale models as well as advanced AI applications such as text-to-image generation and object segmentation.

Since 2018, Meta’s hardware reliability journey has led to novel findings, identifying unique failure types in disks, CPUs, memories, switches, GPUs, ASICs, and networks, often leading the industry in discovering failure modes. We have developed mitigation policies to ensure smooth infrastructure operation and availability for billions of users and thousands of internal use cases. As we continue to build large AI clusters, understanding hardware failures and mitigation strategies is crucial for the reliable training of large-scale AI models.

Training large-scale models involves thousands of accelerators in a synchronous environment, where any component failure can interrupt or halt the process. We focus on reducing hardware failures during training through detection and diagnostics, and quickly restarting training with healthy servers and accelerators. This involves optimizing fault categorization, device triage, node selection, cluster validation, and checkpoint restore.

From our experience running the Llama 3 herd of models, we find that hardware failures in components such as SRAMs, HBMs, processing grids, and network switch hardware significantly impact AI cluster reliability, with over 66% of training interruptions due to such failures. Some of the challenges for AI clusters include accelerators that might be less reliable than CPUs due to complexity and limited telemetry, network complexity that could result in misattributed failures, and errors within the GPU software stack that may require extensive configuration to correct. Hence, reducing hardware and configuration failures greatly enhances cluster efficiency.

Types of hardware faults encountered at Meta

The hardware faults or errors that we observe in our infrastructure can be classified broadly into three categories: 

Static errors 

Hardware failures often appear as binary states: A device either powers on or powers off. These static errors are straightforward to identify in large-scale fleets. If devices fail to power on or enumerate, simple health checks can verify their presence and configurations. As configurations and device scales grow in large training clusters, these faults occur more frequently but are easier to triage, root-cause, and repair, making them manageable at scale. 

Transient errors 

Transient errors, categorized by their reproducibility, include load-dependent or partially observable faults, such as device issues from thermal runaway or random crashes from uncorrectable errors. Mitigation involves understanding the conditions under which these faults manifest; our large scale aids in triaging and pattern matching, allowing us to set traps for those conditions. When triggered, devices are marked for mitigation or repair. Advances in RAS telemetry in hyperscale infrastructure have greatly improved this process. Factors including workload sensitivity, temperature range, frequency, and manufacturing parameters contribute to these errors.

Mitigation can also involve inducing conditions with artificial workloads in non-production stages to make faults more repeatable. Additionally, capturing transient states as “sticky” status values provides telemetry indications for hardware failures. Though less frequent than static faults and harder to detect, Meta’s scale and our significant engineering efforts have made these scenarios detectable.

Silent errors 

Silent errors or silent data corruptions (SDCs) occur when hardware miscomputes without leaving detectable traces, leading applications to consume incorrect results. These errors, often due to silicon defects, can remain unnoticed for long periods unless significant deviations are observed. Detecting them requires extensive engineering and costly telemetry to trace data corruption back to specific devices. These faults significantly impact large-scale services due to the lack of telemetry and the continued consumption of corrupted results.

Case studies, including one where a single computation error led to missing rows in a Spark application, highlight the prevalence of silent errors in hyperscale infrastructures. Historically, soft-error-related bitflips were reduced to one fault per million devices, but with increased silicon density in accelerators, silent data corruptions now occur at about one fault per thousand devices, much higher than cosmic-ray-induced soft errors.

Key challenges presented by SDCs 

SDCs present significant challenges in hyperscale infrastructure due to their data dependency, creating an impractical exponential test space for all possible data values. These faults also depend on device voltage, frequency, operating temperature, and life cycle. For instance, a device may fail computational checks only after months of use, indicating a state of “wear out.” Therefore, consistent, periodic, and frequent testing within a random state space is necessary throughout the device’s life cycle to identify these inaccuracies.

Novel SDC detection mechanisms 

To protect applications from silent data corruption, Meta employs several detection mechanisms, as detailed in the papers “Detecting Silent Errors in the Wild” and “Hardware Sentinel.”

  1. Fleetscanner: Fleetscanner captures performance outliers at scale with targeted micro-benchmarks for identifying hardware defects. These benchmarks’ signatures are integrated into telemetry for non-benchmark-based detection. This approach involves running directed tests during maintenance operations such as firmware upgrades and hardware repairs. Tests are scheduled periodically, covering the entire fleet every 45 to 60 days. While it provides dedicated testing on hosts, it may be too slow for some SDCs.
  2. Ripple: Ripple co-locates with workloads, executing tests in milliseconds to seconds, allowing fleet-wide coverage in days. It overlaps test instructions across cores and threads, providing faster detection than Fleetscanner.
  3. Hardware Sentinel: This novel, test-and-architecture-agnostic approach evaluates application exceptions in kernel space. It identifies core-based anomalies as silent data corruption without requiring test allocations, operating solely in the analytical plane. Hardware Sentinel outperforms testing-based methods by 41% across architectures, applications, and data centers.

Combined, these three mechanisms provide some of the best in-fleet coverage at scale for detecting SDCs and protecting our infrastructure against them.
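
As a rough illustration of the directed-testing side of these mechanisms (the Fleetscanner and Ripple style of scanning), the sketch below runs a deterministic compute kernel repeatedly and compares each result against a golden checksum; any mismatch on identical inputs is a candidate SDC on the host under test. The kernel, sizes, and iteration count are hypothetical and are not Meta's actual test content.

```python
import numpy as np

def golden_check(seed: int = 1234, iters: int = 100) -> list:
    """Run a deterministic matrix-multiply kernel repeatedly on fixed inputs
    and flag any iteration whose checksum deviates from the first (golden)
    result. On identical inputs, a deviation is a candidate silent data
    corruption on this host. Sizes, seed, and iteration count are illustrative."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((512, 512), dtype=np.float32)
    b = rng.standard_normal((512, 512), dtype=np.float32)

    golden = float((a @ b).sum())        # reference checksum from the first run
    suspect_iterations = []
    for i in range(iters):
        checksum = float((a @ b).sum())  # same inputs, same kernel, same host
        if checksum != golden:           # bit-exact comparison is intentional here
            suspect_iterations.append(i)
    return suspect_iterations

if __name__ == "__main__":
    bad = golden_check()
    print("suspect iterations:", bad if bad else "none")
```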

Silent errors in AI hardware 

The methodologies described above execute across the fleet and are fully productionized at scale, detecting SDCs across AI and non-AI infrastructure. However, AI applications such as training and inference have unique and more challenging implications for SDCs. 

SDCs in training workloads

SDCs in training workloads lead to incorrect computations, affecting both forward and backward passes. This results in a divergence from the intended training path, impacting training efficacy. While AI training workloads are sometimes considered self-resilient to SDCs, this is true only for a limited subset of SDC manifestations. In most realistic scenarios, self-resilience is inadequate. SDCs persist across iterations, and the quantization of data values in AI training, which increases information per bit, exacerbates the impact of SDCs, continuously increasing divergence rates in training workloads.

Below we present the two most common cases of training divergence due to SDCs.

Not-a-Number (NaN) propagation 

Not-a-Number (NaN) propagation occurs when an SDC pushes a representable value into an incorrect representation, generating a NaN during training computations. Once a NaN is created, it propagates through subsequent computations, affecting the training iteration, accelerator domain, host domain, and eventually the entire cluster. This widespread NaN contagion can lead to a cluster halt, as the source—often a few specific computations on a single accelerator—may be difficult to trace amidst the cluster’s scale. Identifying and quarantining the offending accelerator and nodes are necessary to resolve the issue.
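
As a rough sketch of the kind of NaN trap such a cluster might run, the following checks local gradients after the backward pass and names the ranks holding non-finite values before an all-reduce can spread them. It assumes a PyTorch distributed job; the function, its error handling, and the quarantine step it stands in for are illustrative rather than Meta's implementation.

```python
import torch
import torch.distributed as dist

def nan_trap(model: torch.nn.Module, step: int) -> None:
    """After backward(), check local gradients for NaN/Inf and name the
    ranks holding non-finite values before an all-reduce can spread the
    corruption across the cluster. Illustrative sketch: a real system would
    quarantine the offending host rather than just raise, and with the NCCL
    backend the flag tensor would need to live on the GPU."""
    has_bad_grad = any(
        p.grad is not None and not torch.isfinite(p.grad).all()
        for p in model.parameters()
    )
    local_flag = torch.tensor([int(has_bad_grad)], dtype=torch.int32)
    if dist.is_initialized():
        # Gather one flag per rank so the job can identify the suspect accelerator.
        flags = [torch.zeros_like(local_flag) for _ in range(dist.get_world_size())]
        dist.all_gather(flags, local_flag)
        bad_ranks = [r for r, f in enumerate(flags) if f.item() == 1]
    else:
        bad_ranks = [0] if has_bad_grad else []
    if bad_ranks:
        raise RuntimeError(f"step {step}: non-finite gradients on ranks {bad_ranks}")
```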

Corrupted gradient variance 

Corrupted gradient variance occurs when an SDC affects gradient calculations, leading to gradient explosion, implosion, or local minima. This corruption, while within numeric bounds, is mistakenly treated as correct, affecting the entire cluster in synchronous training. The corrupted values are exchanged as true values, causing the training to appear to progress without actual improvement. Over time, SDCs aggregate, causing major divergences in gradients, potentially trapping the algorithm in local minima or causing gradient explosions or implosions.

Detecting these SDCs is challenging due to their subtlety and the time required to observe their effects, which can take weeks or months. Unlike NaN propagation, these corruptions are harder to trace and rectify, as they don’t trigger NaN traps. Consequently, SDCs can lead to significant unproductive use of computational resources and training iterations. Without detection, the root cause remains elusive, making subsequent training risky until the offending device is identified and isolated.
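
One hedged way to surface this class of corruption earlier is to track the global gradient norm against its own running statistics and flag outliers, as sketched below. The z-score threshold and warm-up length are illustrative choices, not values Meta uses.

```python
import math

class GradNormMonitor:
    """Track a running mean/variance of the global gradient norm (Welford's
    algorithm) and flag steps that deviate by more than z_max standard
    deviations. Corrupted gradients that never become NaN often show up as
    a jump or drift in this statistic. Threshold and warm-up are illustrative."""
    def __init__(self, z_max: float = 6.0, warmup_steps: int = 100):
        self.z_max = z_max
        self.warmup_steps = warmup_steps
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, grad_norm: float) -> bool:
        """Return True if this step's gradient norm looks anomalous."""
        self.n += 1
        delta = grad_norm - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (grad_norm - self.mean)
        if self.n <= self.warmup_steps:
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        if std == 0.0:
            return False
        return abs(grad_norm - self.mean) / std > self.z_max
```

In a training loop, update() would be fed the value returned by torch.nn.utils.clip_grad_norm_ or an explicitly computed global norm, and a True result would trigger the triage strategies described below.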

SDCs in inference workloads

In inference applications, SDCs lead to incorrect results, which, due to the scale of operations, affect thousands of inference consumers. Persistent SDCs can directly impact decisions made by systems such as recommendation engines or LLM outputs. These corruptions can bypass policies related to privacy or integrity, as corrupted values are not constrained by application-level boundaries. Consequently, inference corruptions significantly reduce the efficacy of models trained with substantial computational resources, making seemingly benign inference use cases problematic at scale.

Impact of SDCs

SDCs in training and inference clusters create complex debugging scenarios across thousands of components. 

In training, visible faults halt the cluster, but SDCs create an illusion of progress, obscuring the fault source. NaN propagation requires identifying the offending node; otherwise, restarts from checkpoints will eventually fail. Corrupted gradient variance prolongs this illusion until variances aggregate, making restarts ineffective. SDCs thus cause significant computational inefficiency, with a larger temporal impact than visible faults.

In inference, triage involves costly telemetry at each substage. Until the offending node is identified, inference clusters can’t be used, risking repeat corruption. Large deviations are easier to detect with anomaly detectors, but smaller ones require extensive debugging. This process can involve hundreds of engineers, halt production use cases, and reduce the reliable capacity available for serving production.

Detection of SDCs in AI hardware 

Mitigation strategies that we run in our infrastructure for dealing with SDCs in AI training workloads are classified into infrastructure strategies and stack strategies:

Infrastructure strategies

These are applied during operational triage at the cluster level. They focus on managing and mitigating SDCs through the physical and network infrastructure, ensuring that the hardware and system-level components are robust and capable of handling errors effectively. 

Reductive triage

This strategy involves conducting a binary search with mini-training iterations on progressively smaller cluster sizes to isolate NaN propagation. The goal is to identify a small cluster that replicates the NaN issue, allowing the offending node to be quarantined for further investigation. A reconstituted cluster with new nodes can then resume training from a saved checkpoint. However, this method relies on the ability to reproduce SDCs, which is not always guaranteed due to their dependence on data, electrical, and temperature variations. For corrupted gradient variance, a similar divide-and-triage approach can be used, but the effectiveness varies with training data and cluster size, despite consistent hyperparameter settings.
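
A minimal sketch of this divide-and-triage loop is shown below. Here, reproduces_nan is a hypothetical callback that launches mini-training iterations on a given node subset; as noted above, reproduction is not guaranteed for data-, voltage-, or temperature-dependent SDCs, which is the main caveat of this method.

```python
from typing import Callable, List, Sequence

def reductive_triage(nodes: Sequence[str],
                     reproduces_nan: Callable[[Sequence[str]], bool]) -> List[str]:
    """Binary-search the cluster for the smallest node subset that still
    reproduces a NaN under mini-training iterations. reproduces_nan is a
    hypothetical callback that launches a short training job on the given
    nodes and reports whether the NaN appeared."""
    suspects = list(nodes)
    while len(suspects) > 1:
        half = len(suspects) // 2
        left, right = suspects[:half], suspects[half:]
        if reproduces_nan(left):
            suspects = left
        elif reproduces_nan(right):
            suspects = right
        else:
            break  # the fault did not reproduce on either half; stop narrowing
    return suspects  # candidate nodes to quarantine for deeper investigation
```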

Deterministic training

This approach involves running a known effective model for a few training iterations to ensure there are no NaNs or gradient divergences. It helps verify computational failures that are not data-dependent, as it guarantees correctness for a specific set of values and training inputs.
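
The sketch below illustrates the idea under stated assumptions: fixed seeds, deterministic kernels, a fixed synthetic batch, and a golden loss trace recorded on trusted hardware of the same configuration. Both make_model and golden_losses are hypothetical inputs from a validation harness, not part of Meta's tooling.

```python
import torch

def deterministic_probe(make_model, golden_losses, steps: int = 20) -> bool:
    """Run a known-good model with fixed seeds, deterministic kernels, and a
    fixed synthetic batch, comparing the loss trajectory against a golden
    trace recorded on trusted hardware. Any mismatch points at a
    computational fault rather than a data-dependent one."""
    torch.manual_seed(0)
    torch.use_deterministic_algorithms(True)
    model = make_model()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    inputs = torch.randn(8, 128)    # fixed synthetic batch (seeded above)
    targets = torch.randn(8, 128)
    for step in range(steps):
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        if loss.item() != golden_losses[step]:  # bit-exact on identical hardware/config
            return False
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return True
```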

Hyper-checkpointing

This method involves creating checkpoints at increasingly high frequencies to facilitate faster identification and isolation of the corrupting node. It helps maintain training throughput while containing NaN propagation to a specific accelerator or host, thereby speeding up the triage and quarantine process.
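
A toy sketch of such an escalating schedule appears below; the interval policy (halving on suspicion, with a floor) is purely illustrative and not the production policy.

```python
class HyperCheckpointer:
    """Escalate checkpoint frequency when an SDC is suspected, so the
    corrupting step can be bracketed between two closely spaced checkpoints
    and the offending accelerator isolated quickly. The halving schedule and
    floor are illustrative."""
    def __init__(self, base_interval: int = 1000, min_interval: int = 10):
        self.interval = base_interval
        self.min_interval = min_interval

    def on_suspicion(self) -> None:
        # Halve the checkpoint interval each time an anomaly is flagged.
        self.interval = max(self.min_interval, self.interval // 2)

    def should_checkpoint(self, step: int) -> bool:
        return step % self.interval == 0
```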

Stack strategies

These require coordination with the workload and involve adjustments and enhancements at the software-stack level. This includes implementing error detection and correction mechanisms within the application and software layers to handle SDCs more effectively during training processes.

Gradient clipping

This strategy enforces gradient clipping within the training workload to limit values to a specified range, thereby mitigating NaN propagation. Computations exceeding this range are clipped, and NaNs can be caught during this step and replaced with a max or min value based on the operand sign. While effective for some NaNs, depending on the representation format, it may introduce partial errors in certain cases.
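
A hedged PyTorch sketch of this combination, scrubbing non-finite gradient entries and then clipping by norm, is shown below; the bounds are illustrative, and as noted the scrubbed values can still carry a partial error.

```python
import torch

def clip_and_scrub_gradients(model: torch.nn.Module, max_norm: float = 1.0) -> None:
    """Replace non-finite gradient entries and then clip the global gradient
    norm, containing NaN propagation at the cost of a possible partial error.
    Here NaNs become 0 and infinities become +/- max_norm; the bounds are
    illustrative."""
    for p in model.parameters():
        if p.grad is not None:
            torch.nan_to_num_(p.grad, nan=0.0, posinf=max_norm, neginf=-max_norm)
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
```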

Algorithmic fault tolerance

This robust approach integrates fault tolerance into training algorithms to handle a range of data corruptions, reducing the need for detection and triage. It enhances computational efficiency with minimal overhead, as demonstrated in CPU training. This method requires understanding common defect modes and investing in engineering across the stack, with modified guarantees to training workloads, albeit with some overhead to the overall training footprint.

Tri-variate computational training architecture

This approach uses shadow nodes in synchronous training to mitigate SDCs. Training steps are repeated across different nodes at random iterations, ensuring correct progress after verification. If shadow and live nodes differ, training halts, and only those nodes are investigated. The rest continue with new nodes. This method involves multiple shadow-node pools, a random training-node pool, and specified steps from the same checkpoint. It offers robust training but demands significant algorithmic changes and increased data movement and infrastructure overhead.
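
A small sketch of the verification step, comparing parameter states after a live node and a shadow node replay the same training step from the same checkpoint, is given below; the orchestration of shadow pools, node replacement, and checkpoint management is omitted.

```python
import torch

def verify_against_shadow(live_state: dict, shadow_state: dict) -> list:
    """Compare parameter tensors after a live node and a shadow node replay
    the same training step from the same checkpoint. Any mismatch means one
    of the two nodes silently miscomputed, and both should be pulled for
    investigation while the rest of the cluster continues."""
    mismatched = []
    for name, live_tensor in live_state.items():
        if not torch.equal(live_tensor, shadow_state[name]):
            mismatched.append(name)
    return mismatched
```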

Parameter vulnerability factors

This approach identifies vulnerable and resilient layers in machine-learning architectures, allowing vulnerable layers to be mapped to resilient hardware and resilient layers to unprotected hardware. This dynamic evaluation must scale as architectures evolve. Resilience often incurs costs in area, power, or performance, so the parameter vulnerability factor (PVF) enables targeted resilient design, especially for inference.

Divergence detection

This mechanism maintains a distribution map for each neuron to detect divergence from typical output distributions, identifying inference corruptions. Though costly, it can be applied at selected sampling rates for large-scale inference. By preserving each neuron’s typical behavior for specific workloads, divergence from that behavior helps detect corruptions during execution.
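
A rough sketch of a per-neuron monitor of this kind is shown below, using a forward hook and running statistics; the warm-up length and z-score threshold are illustrative, and in practice such a check would run only at a sampling rate.

```python
import torch

class NeuronDivergenceDetector:
    """Maintain running per-neuron statistics of a layer's activations via a
    forward hook and flag outputs that fall far outside the learned
    distribution. Warm-up length and z-score threshold are illustrative."""
    def __init__(self, layer: torch.nn.Module, z_max: float = 8.0, warmup: int = 100):
        self.z_max = z_max
        self.warmup = warmup
        self.count = 0
        self.mean = None
        self.m2 = None
        self.flagged_neurons = []
        layer.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        act = output.detach().float().mean(dim=0)   # per-neuron mean over the batch
        if self.mean is None:
            self.mean = torch.zeros_like(act)
            self.m2 = torch.zeros_like(act)
        self.count += 1
        delta = act - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (act - self.mean)        # Welford update, per neuron
        if self.count > self.warmup:
            std = (self.m2 / (self.count - 1)).sqrt().clamp_min(1e-6)
            z = (act - self.mean).abs() / std
            if bool((z > self.z_max).any()):
                self.flagged_neurons.append(
                    torch.nonzero(z > self.z_max).flatten().tolist())
```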

While we have optimized these different methodologies to run effectively in our infrastructure, it should be noted that they offer varying levels of resilience with distinct operating points and engineering/infrastructure overheads. Depending on the scale and intensity of training and inference workloads, orchestrating these strategies effectively can mitigate SDCs’ adverse effects in AI applications.

Performance faults and unknown unknowns!

While SDCs are a major challenge at hyperscale, Meta has been developing solutions to detect performance regressions. ServiceLab, for example, is a large-scale performance testing platform that helps identify tiny performance regressions at scale. In addition, Fleetscanner has identified hundreds of performance outliers, seen as an emergent fault mode alongside SDCs.

While current mechanisms detect and address static, transient, and silent faults, the full range of hardware fault variants remains partially uncovered. The unknown unknowns require agile solutions across the entire infrastructure and silicon lifecycle, as well as across the hardware-to-software and application stack, to achieve first-class reliability operations.

A journey towards industry leadership and standardization

Meta’s journey toward industry leadership in SDC began with identifying frequent fleet issues in 2016, scaling SDC detection in 2018, and implementing detection frameworks by 2019. By 2020, detection mechanisms were integrated into accelerators, and Meta published the paper, “Silent Data Corruptions at Scale.” In 2022, Meta introduced “FleetScanner and Ripple” and conducted an RFP for academic awards, funding five winners. 

In 2023, Meta collaborated with industry leaders (Google, Microsoft, ARM, AMD, NVIDIA, and Intel) to enhance server resilience, defining test architectures and metrics. A joint RFP with partners from the Open Compute Project selected six winners for cross-domain SDC research. By 2024, Meta’s fleet had advanced AI SDC detection methodologies in production, contributing to research through publications, tutorials, and talks at major conferences and forums, addressing at-scale reliability challenges.

The Meta Training and Inference Accelerator 

Meta is on an ambitious journey toward enabling training and inference accelerators under the Meta Training and Inference Accelerator (MTIA) family. On this journey, our goal is to utilize all the lessons learned from the fleet and move toward industry-leading, fleet-reliability practices in MTIA architecture and design practices. Using the factory-to-fleet approach, and consistently revisiting our reliability solutions across the stack, our goal is to deliver a best-in-class, reliable-and-performant solution to add to our infrastructure portfolio of AI hardware and to power AI applications at scale. 

Factory to fleet

To uncover unknowns early, a comprehensive factory-to-fleet view of the silicon life cycle is key. Innovation is needed in all phases, from design to deployment. In design and architecture, revisiting RAS solutions for scale, life-cycle debug hooks, and telemetry architectures can support tools such as Hardware Sentinel, Fleetscanner, and Ripple. During validation and integration, novel yield analysis, manufacturing diagnostics, and fleet-signature-feedback-based detection can prevent faults before shipping. In AI silicon fleets, user-space diagnostics with periodic testing, coverage maps, and control parameters are beneficial. Large-scale analytics like Hardware Sentinel can detect early wear out and data corruption. Robust firmware hooks and debug architecture provide fast feedback to design and architecture amidst fleet-scale issues.

Stack-level resilience

Factory-to-fleet solutions offer life-cycle resilience for silicon, but resilience must extend beyond silicon to firmware, compilers, kernels, and operating systems. Investments in resilience architectures are needed for correctness-invariant-instruction heterogeneity and enhanced telemetry for exception tracing. Granular firmware-control mechanisms improve telemetry upon fault detection. At the software and application level, techniques like gradient clipping and algorithmic fault tolerance, which we called out in this blog, are crucial for resilience amid corruptions. Experience with SDCs shows that in-line software resilience and test-agnostic analytical approaches effectively scale for many SDCs with minimal investment, while testing-based approaches are limited to specific instructions.

Hardware faults significantly impact AI training and inference production. As cluster sizes and semiconductor complexity grow, fault complexity will exponentially increase. Solutions must involve factory-to-fleet coordination and stack-level resiliency. For AI applications, treating reliability as a primary design consideration is essential.

Acknowledgments

The authors would like to thank all the cross-functional engineers and teams instrumental in landing these solutions over the years. This blog accompanies our @Scale conference talk; please check out the talk for more details.






A better path to pruning large language models

In recent years, large language models (LLMs) have revolutionized the field of natural-language processing and made significant contributions to computer vision, speech recognition, and language translation. One of the keys to LLMs’ effectiveness has been the exceedingly large datasets they’re trained on. The trade-off is exceedingly large model sizes, which lead to slower runtimes and higher consumption of computational resources. AI researchers know these challenges well, and many of us are seeking ways to make large models more compact while maintaining their performance.

To this end, we’d like to present a novel philosophy, “Prune Gently, Taste Often”, which focuses on a new way to do pruning, a compression process that removes unimportant connections within the layers of an LLM’s neural network. In a paper we presented at this year’s meeting of the Association for Computational Linguistics (ACL), we describe our framework, Wanda++, which can compress a model with seven billion parameters in under 10 minutes on a single GPU.

Measured according to perplexity, or how well a probability distribution predicts a given sample, our approach improves the model’s performance by 32 percent over its leading predecessor, called Wanda.

A brief history of pruning

Pruning is challenging for a number of reasons. First, training huge LLMs is expensive, and once they’re trained, runtime is expensive too. While pruning can make runtime cheaper, if it’s done later in the build process, it hurts performance. But if it’s done too early in the build process, it further exacerbates the first problem: increasing the cost of training.

When a model is trained, it builds a map of semantic connections gleaned from the training data. These connections, called parameters, gain or lose importance, or weight, as more training data is introduced. Pruning during the training stage, called “pruning-aware training,” is baked into the training recipe and performs model-wide scans of weights, at a high computational cost. What’s worse, pruning-aware training comes with a heavy trial burden of full-scale runs. Researchers must decide when to prune, how often, and what criteria to use to keep pretraining performance viable. Tuning such “hyperparameters” requires repeated model-wide culling experiments, further driving up cost.

The other approach to pruning is to do it after the LLM is trained. This tends to be cheaper, taking somewhere between a few minutes and a few hours — compared to the weeks that training can take. And post-training pruning doesn’t require a large number of GPUs.

In this approach, engineers scan the model layer by layer for unimportant weights, as measured by a combination of factors such as how big the weight is and how frequently it factors into the model’s final output. If either number is low, the weight is more likely to be pruned. The problem with this approach is that it isn’t “gentle”: it shocks the structure of the model, which loses accuracy since it doesn’t learn anything from the absence of those weights, as it would have if they had been removed during training.

Striking a balance

Here’s where our philosophy presents a third path. After a model is fully trained, we scan it piece by piece, analyzing weights neither at the whole-model level nor at the layer level but at the level of decoding blocks: smaller, repeating building blocks that make up most of an LLM.

Within each decoding block, we feed in a small amount of data and collect the output to calibrate the weights, pruning the unimportant ones and updating the surviving ones for a few iterations. Since decoder blocks are small — a fraction of the size of the entire model — this approach requires only a single GPU, which can scan a block within minutes.
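
To make the calibration-and-prune step concrete, here is a minimal sketch of the magnitude-times-activation ("Wanda-style") criterion applied to a single linear layer inside a decoder block. The regional-gradient scoring and the updates to surviving weights that Wanda++ adds are not shown, so this is an illustration of the idea rather than the paper's full method; layer shapes and the sparsity level are assumptions.

```python
import torch

def prune_linear_wanda_style(layer: torch.nn.Linear,
                             calib_inputs: torch.Tensor,
                             sparsity: float = 0.5) -> None:
    """Score each weight by |weight| * ||input feature||_2 over a small
    calibration batch and zero out the lowest-scoring fraction, row by row.
    calib_inputs has shape [..., in_features] and comes from feeding a small
    amount of data through the block."""
    with torch.no_grad():
        # Per-input-feature activation norm over the calibration samples.
        feat_norm = calib_inputs.reshape(-1, calib_inputs.shape[-1]).norm(p=2, dim=0)
        score = layer.weight.abs() * feat_norm          # [out_features, in_features]
        k = int(layer.weight.shape[1] * sparsity)       # weights to drop per output row
        if k > 0:
            drop_idx = torch.topk(score, k, dim=1, largest=False).indices
            mask = torch.ones_like(layer.weight)
            mask.scatter_(1, drop_idx, 0.0)
            layer.weight.mul_(mask)
```

In the block-level view described above, this kind of pruning would be interleaved with a few light updates to the surviving weights before moving on to the next decoder block.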

We liken our approach to the way an expert chef spices a complex dish. In cooking, spices are easy to overlook and hard to add at the right moment — and even risky, if handled poorly. One simply cannot add a heap of tarragon, pepper, and salt at the beginning (pruning-aware training) or at the end (layer-wide pruning) and expect to have the same results as if spices had been added carefully throughout. Similarly, our approach finds a balance between two extremes. Pruning block by block, as we do, is more like spicing a dish throughout the process. Hence the motto of our approach: Prune Gently, Taste Often.

From a technical perspective, the key is focusing on decoding blocks, which are composed of a few neural-network layers such as attention layers, multihead attention layers, and multilayer perceptrons. Even an LLM with seven billion parameters might have just 32 decoder blocks. Each block is small enough — say, 200 million parameters — to easily be scanned by a single GPU. Pruning a model at the block level saves resources by not consuming much GPU memory.

And while all pruning processes initially diminish performance, ours actually brings it back. Every time we scan a block, we balance pruning with performance until they’re optimized. Then we move on to the next block. This preserves both performance at the block level and overall model quality. With Wanda++, we’re offering a practical, scalable middle path for the LLM optimization process, especially for teams that don’t control the full training pipeline or budget.

Pruning at the level of the decoder block is “gentle” because the effects of the pruning are localized; they exert less influence on the overall behavior of the model. Repeating the pruning process for each block is like the practice of a chef who “tastes often” to ensure that the spices in the meal under preparation remain in balance.

What’s more, we believe our philosophy also helps address a pain point of LLM development at large companies. Before the era of LLMs, each team built its own models, with the services that a single LLM now provides achieved via orchestration of those models. Since none of the models was huge, each model development team received its own allocation of GPUs. Nowadays, however, computational resources tend to get soaked up by the teams actually training LLMs. With our philosophy, teams working on runtime performance optimization, for instance, could reclaim more GPUs, effectively expanding what they can explore.

Further implementations of Prune Gently, Taste Often could apply to other architectural optimizations. For instance, calibrating a model at the decoder-block level could convert a neural network with a dense structure, called a dense multilayer perceptron, to a less computationally intensive neural network known as a mixture of experts (MoE). In essence, per-decoder-block calibration can enable a surgical redesign of the model by replacing generic components with more efficient and better-performing alternatives such as Kolmogorov-Arnold Networks (KAN). While the Wanda++ philosophy isn’t a cure-all, we believe it opens up an exciting new path for re-thinking model compression and exploring future LLM architectures.






Three challenges in machine-based reasoning

Generative AI has made the past few years the most exhilarating time in my 30+-year career in the space of mechanized reasoning. Why? Because the computer industry and even the general public are now eager to talk about ideas that those of us working in logic have been passionate about for years. The challenges of language, syntax, semantics, validity, soundness, completeness, computational complexity, and even undecidability were previously too academic and obscure to be relevant to the masses. But all of that has changed. To those of you who are now discovering these topics: welcome! Step right in, we’re eager to work with you.

I thought it would be useful to share what I believe are the three most vexing aspects of making correct reasoning work in AI systems, e.g., generative-AI-based systems such as chatbots. The upcoming launch of the Automated-Reasoning-checks capability in Bedrock Guardrails was in fact motivated by these challenges. But we are far from done: due to the inherent difficulty of these problems, we as a community (and we on the Automated-Reasoning-checks team) will be working on these challenges for years to come.

Difficulty #1: Translating from natural to structured language

Humans usually communicate with imprecise and ambiguous language. Often, we are able to infer disambiguating detail from context. In some cases, when it really matters, we will try to clarify with each other (“did you mean to say… ?”). In other cases, even when we really should, we won’t.

This is often a source of confusion and conflict. Imagine that an employer defines eligibility for an employee HR benefit as “having a contract of employment of 0.2 full-time equivalent (FTE) or greater”. Suppose I tell you that I “spend 20% of my time at work, except when I took time off last year to help a family member recover from surgery”. Am I eligible for the benefit? When I said I “spend 20% of my time at work”, does that mean I am spending 20% of my working time, under the terms of a contract?

My statement has multiple reasonable interpretations, each with different outcomes for benefit eligibility. Something we do in Automated Reasoning checks is make multiple attempts to translate between the natural language and query predicates, using complementary approaches. This is a common interview technique: ask for the same information in different ways, and see if the facts stay consistent. In Automated Reasoning checks, we use solvers for formal logic systems to prove/disprove the equivalence of the different interpretations. If the translations differ at the semantic level, the application that uses Automated Reasoning checks can then ask for clarifications (e.g. “Can you confirm that you have a contract of employment for 20% of full time or greater?”).
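
As a toy illustration of checking whether two candidate translations agree, the sketch below encodes both readings of the eligibility statement and asks the Z3 SMT solver whether any scenario distinguishes them. The variable names, the exact encoding, and the use of Z3 are illustrative assumptions, not the product's internal representation.

```python
from z3 import Bool, Real, Solver, Not, unsat

# Two candidate translations of "eligible if the contract is 0.2 FTE or greater".
contract_fte = Real("contract_fte")        # contractual full-time-equivalent fraction
time_at_work = Real("time_at_work")        # fraction of time actually spent at work
eligible = Bool("eligible")

reading_a = eligible == (contract_fte >= 0.2)   # contractual-FTE interpretation
reading_b = eligible == (time_at_work >= 0.2)   # observed-time interpretation

solver = Solver()
solver.add(Not(reading_a == reading_b))    # is there any scenario where they disagree?
if solver.check() == unsat:
    print("The two translations are semantically equivalent.")
else:
    print("The translations diverge; example scenario:", solver.model())
```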

Automated Reasoning checks use large language models to generate several possible translations of natural language into a formal language. Automated Reasoning checks flag discrepancies between the translations, which customers can resolve through natural-language interactions.

Difficulty #2: Defining truth

Something that never fails to amaze me is how difficult it is for groups of people to agree on the meanings of rules. Complex rules and laws often have subtle contradictions that can go unnoticed until someone tries to reach consensus on their interpretation. The United Kingdom’s Copyright, Designs and Patents Act 1988, for example, contains an inherent contradiction: it defines copyrightable works as those stemming from an author’s original intellectual creation, while simultaneously offering protection to works that require no creative human input — an incoherence that is particularly glaring in this age of AI-generated works.

The second source of trouble is that we seem to always be changing our rules. The US federal government’s per-diem rates, for example, change annually, requiring constant maintenance of any system that depends on those values.

Finally, few people actually deeply understand all of the corner cases of the rules that they are supposed to abide by. Consider the question of wearing earphones while driving: In some US states (e.g., Alaska) it’s illegal; in some (e.g., Florida) it’s legal to wear one earphone only; and in others (e.g., Texas) it’s fully legal. In an informal poll, very few of my friends and colleagues were confident in their understanding of the legality of wearing headphones while driving in the place where they most recently drove a car.

Automated Reasoning checks address these challenges by helping customers define what the truth should be in their domains of interest — be they tax codes, HR policies, or other rule systems — and by providing mechanisms for refining those definitions over time, as the rules change. As generative-AI-based (GenAI-based) chatbots emerged, something that captured the imagination of many of us is the idea that complex rule systems could be made accessible to the general public through natural-language queries. Chatbots could in the future give direct and easy-to-understand answers to questions like “Can I make a U-turn when driving in Tokyo, Japan?”, and by addressing the challenge of defining truth, Automated Reasoning checks can help ensure that the answer is reliable.

The user interface for Automated Reasoning checks.

Difficulty #3: Definitive reasoning

Imagine we have a set of rules (let’s call it R) and a statement (S) we want to verify. For example, R might be Singapore’s driving code, and S might be a question about U-turns at intersections in Singapore. We can encode R and S into Boolean logic, which computers understand, by combining Boolean variables in various ways.

Let’s say that encoding R and S needs just 500 bits — about 63 characters. This is a tiny amount of information! But even when our encoding of the rule system is small enough to fit in a text message, the number of scenarios we’d need to check is astronomical. In principle, we must consider all 2^500 possible combinations before we can authoritatively declare S to be a true statement. A powerful computer today can perform hundreds of millions of operations in the time it takes you to blink. But even if we had all the computers in the world running at this blazing speed since the beginning of time, we still wouldn’t be close to checking all 2^500 possibilities today.

Thankfully, the automated-reasoning community has developed a class of sophisticated tools, called SAT solvers, that make this type of combinatorial checking possible and remarkably fast in many (but not all) cases. Automated Reasoning checks make use of these tools when checking the validity of statements.
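
A toy example of this style of check is sketched below using Z3: the statement S is entailed by the rule set R exactly when R ∧ ¬S is unsatisfiable, and the solver settles that without enumerating the exponential space of assignments. The rules themselves are invented for illustration and are not drawn from any real driving code.

```python
from z3 import And, Bools, Implies, Not, Solver, unsat

# Toy rule set R and statement S about U-turns, encoded over Boolean variables.
at_intersection, uturn_sign_posted, uturn_allowed = Bools(
    "at_intersection uturn_sign_posted uturn_allowed")

# R: at an intersection without a permitting sign, a U-turn is not allowed.
R = Implies(And(at_intersection, Not(uturn_sign_posted)), Not(uturn_allowed))
# S: if a U-turn is allowed at an intersection, a permitting sign must be posted.
S = Implies(And(at_intersection, uturn_allowed), uturn_sign_posted)

# S is entailed by R exactly when R AND (not S) is unsatisfiable.
solver = Solver()
solver.add(R, Not(S))
if solver.check() == unsat:
    print("S follows from the rule set R.")
else:
    print("Counterexample scenario:", solver.model())
```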

Unfortunately, not all problems can be encoded in a way that plays to the strengths of SAT solvers. For example, imagine a rule system has the provision “if every even number greater than 2 is the sum of two prime numbers, then the tax withholding rate is 30%; otherwise it’s 40%”. The problem is that to know the tax withholding rate, you need to know whether every even number greater than 2 is the sum of two prime numbers, and no one currently knows whether this is true. This statement is called Goldbach’s conjecture and has been an open problem since 1742. Still, while we don’t know the answer to Goldbach’s conjecture, we do know that it is either true or false, so we can definitively say that the tax withholding rate must be either 30% or 40%.

It’s also fun to think about whether it’s possible for a customer of Automated Reasoning checks to define a policy that is contingent on the output of Automated Reasoning checks. For instance, could the policy encode the rule “access is allowed if and only if Automated Reasoning checks say it is not allowed”? Here, no correct answer is possible, because the rule has created a contradiction by referring recursively to its own checking procedure. The best we can possibly do is answer “Unknown” (which is, in fact, what Automated Reasoning checks will answer in this instance).

The fact that a tool such as Automated Reasoning checks can return neither “true” nor “false” to statements like this was first identified by Kurt Gödel in 1931. What we know from Gödel’s result is that systems like Automated Reasoning checks can’t be both consistent and complete, so they must choose one. We have chosen to be consistent.

These three difficulties — translating natural language into structured logic, defining truth in the context of ever-changing and sometimes contradictory rules, and tackling the complexity of definitive reasoning — are more than mere technical hurdles we face when we try to build AI systems with sound reasoning. They are problems that are deeply rooted in both the limitations of our technology and the intricacies of human systems.

With the forthcoming launch of Automated Reasoning checks in Bedrock Guardrails, we are tackling these challenges through a combination of complementary approaches: applying cross-checking methods to translate from ambiguous natural language to logical predicates, providing flexible frameworks to help customers develop and maintain rule systems, and employing sophisticated SAT solvers while carefully handling cases where definitive answers are not possible. As we work to improve the performance of the product on these challenges, we are not only advancing technology but also deepening our understanding of the fundamental questions that have shaped reasoning itself, from Gödel’s incompleteness theorem to the evolving nature of legal and policy frameworks.

Given our commitment to providing sound reasoning, the road ahead in the AI space is challenging. Challenge accepted!






Building a human-computer interface for everyone

What if you could control any device using only subtle hand movements?

New research from Meta’s Reality Labs is pointing even more firmly toward wrist-worn devices using surface electromyography (sEMG) becoming the future of human-computer interaction.

But how do you develop a wrist-worn input device that works for everyone?

Generalization has been one of the most significant challenges in the field of human-computer interaction (HCI). The machine learning models that power a device can be trained to respond to an individual’s hand gestures, but they struggle to apply that same learning to someone else. Essentially, novel HCI devices are usually one-size-fits-one.

On the latest episode of the Meta Tech Podcast, Pascal Hartig sits down with Sean B., Lauren G., and Jesse M. — research scientists on Meta’s EMG engineering and research team — to discuss how their team is tackling the challenge of generalization and reimagining how we interact with technology. 

They discuss the road to creating a first-of-its-kind, generic human-computer neuromotor interface, what happens when software and hardware engineering meet neuroscience, and more!

You can listen to the episode wherever you get your podcasts.

The Meta Tech Podcast is a podcast brought to you by Meta, where we highlight the work Meta’s engineers are doing at every level – from low-level frameworks to end-user features.

Send us feedback on Instagram, Threads, or X.

And if you’re interested in learning more about career opportunities at Meta visit the Meta Careers page.




