
Events & Conferences

Ten university teams selected for Alexa Prize TaskBot Challenge 2


Amazon today announced that ten teams from around the globe have been selected to participate in the Alexa Prize TaskBot Challenge year 2, a university challenge focused on developing multimodal (voice and vision) conversational agents that assist customers in completing tasks requiring multiple steps and decisions.

Alexa Prize is a flagship industry-academic collaboration dedicated to accelerating the science of conversational artificial intelligence (AI) and multimodal human-AI interactions.

“Prize competitions provide an agile science experimentation framework for researchers and students encouraging them to explore transformational ideas at the boundaries of what is achievable,” said Reza Ghanadan, senior principal scientist with Alexa AI and head of Alexa Prize. “We have developed the CoBot platform and tools to lower the barriers to AI innovation for both the academic research community and students interested in conversational AI assistants. These tools allow students to quickly deploy their solutions at scale in the real world with Alexa, then observe, evaluate, and enhance their research results using feedback from Alexa customers.”

The Alexa Prize TaskBot Bootcamp was held in Seattle, Washington, with representatives from all ten university teams.

The teams selected for the challenge, which began in January, feature five returning entrants — including the top three finishers in the most recent challenge — and five new universities.

Returning teams:

  • TWIZ, NOVA School of Science and Technology (faculty advisor: João Magalhães)
  • EvoquerBOT, Penn State University (faculty advisor: Rui Zhang)
  • Taco 2.0, The Ohio State University (faculty advisor: Huan Sun)
  • GRILL, University of Glasgow (faculty advisor: Jeff Dalton)
  • Maruna, University of Massachusetts Amherst (faculty advisor: Hamed Zamani)

New teams:

  • BoilerBot, Purdue University (faculty advisor: Julia Rayz)
  • DiWBot, Rutgers University (faculty advisor: Matthew Stone)
  • Sage, University of California, Santa Cruz (faculty advisor: Xin (Eric) Wang)
  • ISABEL, University of Pittsburgh (faculty advisor: Malihe Alikhani)
  • PLAN-Bot, Virginia Tech (faculty advisor: Ismini Lourentzou)

The prizes for overall performance in the competition will be $500,000 for the first-place team, $100,000 for second, and $50,000 for third, paid to the students on the winning teams.

“I am delighted to see that new teams are joining the second year of the competition together with returning teams, who, by competing again, are signaling to us that they found value in the TaskBot challenge,” said Yoelle Maarek, vice president of research and science for Amazon Shopping.

“We expect these talented graduate students to continue surprising us, as well as Amazon customers, this year. Connecting academia, Amazonians, and actual customers experimenting with taskbots, is a winning combination to keep pushing the boundaries of science in conversational AI for Alexa to delight and ease the lives of millions of customers.”

The Alexa Prize is a competition for university students dedicated to advancing the field of conversational AI. Launched in 2016, the program was created to recognize students from around the globe who are changing the way we interact with technology.

TaskBot Challenge 2 teams are working to address one of the hardest problems in conversational AI — creating next-generation conversational AI experiences that delight customers by addressing their changing needs as they complete complex tasks. This challenge builds upon the Alexa Prize’s foundation of providing universities a unique opportunity to test cutting-edge machine learning models with actual customers at scale.

The Alexa Prize TaskBot challenge provides a realistic scenario with real-user multimodal interactions, making it the perfect setting to observe and measure human-bot conversations and AI algorithms.

Rafael Ferreira, NOVA School of Science and Technology, Team TWIZ

Our vision of EvoquerBOT combines improving task completion rates and elevating user satisfaction. To this end, we deliver innovative solutions to fundamental NLP challenges.

Haoran Zhang, Penn State University, Team EvoquerBOT

We are especially interested in developing innovative ways to achieve successful coordination of multiple modalities, such as visual and verbal elements, and create a more engaging and intuitive user experience.

Lingbo Mo, The Ohio State University, Team Taco 2.0

The GRILL team is excited to continue bringing cutting-edge AI research to improve people’s lives. Our research team works on new capabilities of foundation models that understand text, images, and the surrounding world.

Sophie Fischer, University of Glasgow, Team GRILL

The competition lets us create interfaces for the general public in a production environment – it’s a unique opportunity to connect our research with our career goals.

Baber Khalid, Rutgers University, Team DiWBot

We are very excited to be part of the community and look forward to working with the Alexa team and other teams.

Anthony Sicilia, University of Pittsburgh, Team ISABEL

The Alexa Prize TaskBot Challenge combines a vast range of tasks over multiple domains with multimodal outputs. This is the ultimate test for any moonshot concept, and we can’t wait to see what the real world has in store for us.

Rey (Alex) Gonzalez, Purdue University, Team BoilerBot

Participating in this competition is an incredible opportunity that will allow us to do applied research and ship it to real users.

Chris Samarinas, University of Massachusetts Amherst, Team Maruna

Although artificial intelligence has experienced explosive development in the past decade, there is still a gap between research and real-world application. The TaskBot Challenge provides us with a unique opportunity to explore multimodal AI in practical situations.

Kaizhi Zheng, University of California, Santa Cruz, Team Sage

Our bot will make adaptable conversation a reality by allowing customers to follow personalized decisions through the completion of multiple, sequential sub-tasks and adapt to the tools, materials, or ingredients available to the user by proposing appropriate substitutes and alternatives.

Afrina Tabassum, Virginia Tech, Team PLAN-Bot

TaskBot is the first conversational AI challenge to incorporate multimodal customer experiences, so in addition to receiving verbal instructions, customers with Echo Show or Fire TV devices can also be presented with step-by-step instructions, images, or diagrams that enhance task guidance.

This year’s challenge has been expanded to include more hobbies and at-home activities. Participating teams were asked to propose interesting ways to incorporate visual aids into every conversation turn when a screen is available. Innovative ideas on improving the presentation of visual aids, as well as the coordination of visual and verbal modalities, were part of the team selection criteria.

Each university selected for the challenge receives a $250,000 research grant, Alexa-enabled devices, free Amazon Web Services (AWS) cloud computing services to support their research and development efforts, access to Amazon scientists, the CoBot (conversational bot) toolkit and other tools such as automated speech recognition through Alexa, neural detection and generation models, conversational data sets, and design guidance and development support from the Alexa Prize team.

“Alexa, let’s work together”

The university teams’ taskbots will be available for Alexa customers to engage with in May 2023 with a finals event being held in September, and winners announced later that month.

As with the previous challenge, Alexa customers can engage in conversation with teams’ taskbots when they become available in May by saying, “Alexa, let’s work together.” Until then, that phrase directs customers to conversations with the winning taskbots from the 2022 Alexa Prize TaskBot Challenge.

After initiating the interaction, Alexa customers then receive a brief message informing them that they are interacting with an Alexa Prize university taskbot before being randomly connected to one of the participating taskbots.

After exiting the conversation with the taskbot, which customers can do at any time, the customer is prompted for a verbal rating, followed by an option to provide additional feedback. The interactions, ratings, and feedback are shared with the teams to help them improve their taskbots. Customer ratings are also used to determine which university teams will move on to the semifinals and finals.

Our goal is to contribute to the multimodal conversational AI field and move it closer to the way humans perceive, reason, and communicate through multimodal information.

João Magalhães, associate professor, NOVA School of Science and Technology, Team TWIZ

We look forward to the Challenge because it is the perfect platform to create multimodal, task-oriented dialogue systems that elevate user experience and engagement.

Rui Zhang, assistant professor, Penn State University, Team EvoquerBOT

Through this TaskBot Challenge, we hope our work can expand the horizon of conversational AI along dimensions like dialogue depth, multi-modal coordination, commonsense reasoning, and learning from use.

Huan Sun, associate professor, The Ohio State University, Team Taco 2.0

The GRILL team is creating the next generation of open assistants that understand and use knowledge about the world and can communicate effectively to inform and educate.

Jeff Dalton, associate professor, University of Glasgow, Team GRILL

Our TaskBot will help people get things done through personalized, adaptive, and context-aware conversational interaction by combining our research results with the state-of-the-art capabilities of Alexa devices.

Matthew Stone, professor, Rutgers University, Team DiWBot

We work towards making conversational AI technology more inclusive and collaborative. Inclusive Alexa can collaborate with users from diverse cultures and with different communication capabilities and preferences.

Malihe Alikhani, assistant professor, University of Pittsburgh, Team ISABEL

We hope to develop a task-oriented system that can interact with users based on their level of knowledge, experience, and communication preference.

Julia Rayz, professor, Purdue University, Team BoilerBot

Success in the previous TaskBot Challenge required teams to address many difficult AI obstacles. The challenge required the fusion of multiple AI techniques including knowledge representation and inference, commonsense and causal reasoning, and language understanding and generation.

The “GRILLBot” team from the University of Glasgow won the TaskBot 1 Challenge, earning a $500,000 prize for its performance. Teams from NOVA School of Science and Technology (Portugal) and The Ohio State University earned second- and third-place prizes, respectively.

Research papers from Amazon’s Alexa Prize team, and each of the competing teams, can be viewed and downloaded here.


A New Ranking Framework for Better Notification Quality on Instagram


  • We’re sharing how Meta is applying machine learning (ML) and diversity algorithms to improve notification quality and user experience. 
  • We’ve introduced a diversity-aware notification ranking framework to reduce uniformity and deliver a more varied and engaging mix of notifications.
  • This new framework reduces the volume of notifications and drives higher engagement rates through more diverse outreach.

Notifications are one of the most powerful tools for bringing people back to Instagram and enhancing engagement. Whether it’s a friend liking your photo, another close friend posting a story, or a suggestion for a reel you might enjoy, notifications help surface moments that matter in real time.

Instagram leverages machine learning (ML) models to decide who should get a notification, when to send it, and what content to include. These models are trained to optimize for positive user engagement, such as click-through rate (CTR) – the probability of a user clicking a notification – as well as other metrics like time spent.

However, while engagement-optimized models are effective at driving interactions, there’s a risk that they might overprioritize the product types and authors someone has previously engaged with. This can lead to overexposure to the same creators or the same product types while overlooking other valuable and diverse experiences. 

This means people could miss out on content that would give them a more balanced, satisfying, and enriched experience. Over time, this can make notifications feel spammy and increase the likelihood that people will disable them altogether. 

The real challenge lies in finding the right balance: How can we introduce meaningful diversity into the notification experience without sacrificing the personalization and relevance people on Instagram have come to expect?

To tackle this, we’ve introduced a diversity-aware notification ranking framework that helps deliver more diverse, better curated, and less repetitive notifications. This framework has significantly reduced daily notification volume while improving CTR. It also introduces several benefits:

  • Extensibility: customized soft-penalty (demotion) logic can be incorporated for each dimension, enabling more adaptive and sophisticated diversity strategies.
  • Flexibility: demotion strength can be tuned across dimensions like content, author, and product type via adjustable weights.
  • Balance: personalization and diversity are weighed jointly, ensuring notifications remain both relevant and varied.

The Risks of Notifications without Diversity

The issue of overexposure in notifications often shows up in two major ways:

Overexposure to the same author: People might receive notifications that are mostly about the same friend. For example, if someone often interacts with content from a particular friend, the system may continue surfacing notifications from that person alone – ignoring other friends they also engage with. This can feel repetitive and one-dimensional, reducing the overall value of notifications.

Overexposure to the same product surface: People might mostly receive notifications from the same product surface such as Stories, even when Feed or Reels could provide value. For example, someone may be interested in both reel and story notifications but has recently interacted more often with stories. Because the system heavily prioritizes past engagement, it sends only story notifications, overlooking the person’s broader interests. 

Introducing Instagram’s Diversity-Aware Notification Ranking Framework

Instagram’s diversity-aware notification ranking framework is designed to enhance the notification experience by balancing the predicted potential for user engagement with the need for content diversity. This framework introduces a diversity layer on top of the existing engagement ML models, applying multiplicative penalties to the candidate scores generated by these models, as Figure 1, below, shows.

The diversity layer evaluates each notification candidate’s similarity to recently sent notifications across multiple dimensions such as content, author, notification type, and product surface. It then applies carefully calibrated penalties—expressed as multiplicative demotion factors—to downrank candidates that are too similar or repetitive. The adjusted scores are used to re-rank the candidates, enabling the system to select notifications that maintain high engagement potential while introducing meaningful diversity. In the end, the quality bar selects the top-ranked candidate that passes both the ranking and diversity criteria.

Figure 1: Instagram’s diversity-aware ranking framework, where the diversity layer sits on top of the existing modeling layer and penalizes notifications that are too similar to recently sent ones.

Mathematical Formulation 

Within the diversity layer, we apply a multiplicative demotion factor to the base relevance score of each candidate. Given a notification candidate 𝑐, we compute its final score as the product of its base ranking score and a diversity demotion multiplier:

\text{Score}(c) = R(c) \times D(c)

where R(c) represents the candidate’s base relevance score, and D(c) ∈ [0,1] is a penalty factor that reduces the score based on similarity to recently sent notifications. We define a set of semantic dimensions (e.g., author, product type) along which we want to promote diversity. For each dimension i, we compute a similarity signal p_i(c) between candidate c and the set of historical notifications H, using a maximal marginal relevance (MMR) approach:

p_i(c) = \mathrm{max}_{h \in H}\mathrm{sim}_i(c, h)

where sim_i(·,·) is a predefined similarity function for dimension i. In our baseline implementation, p_i(c) is binary: it equals 1 if the similarity exceeds a threshold τ_i and 0 otherwise.

The final demotion multiplier is defined as: 

D(c) = \prod_{i=1}^{m} \left( 1 - w_i \cdot p_i(c) \right)

where each w_i ∈ [0,1] controls the strength of demotion for its respective dimension. This formulation ensures that candidates similar to previously delivered notifications along one or more dimensions are proportionally down-weighted, reducing redundancy and promoting content variation. The use of a multiplicative penalty allows for flexible control across multiple dimensions, while still preserving high-relevance candidates.
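Putting the formulation together, the diversity layer can be sketched in a few lines of Python. The candidate fields, similarity functions, weights, and thresholds below are illustrative stand-ins, not production values:

```python
# Sketch of the diversity layer's scoring rule:
# Score(c) = R(c) * prod_i (1 - w_i * p_i(c)), with binary p_i(c).

def final_score(candidate, history, weights, thresholds, sim_fns):
    demotion = 1.0
    for dim, w in weights.items():
        # MMR-style signal: max similarity to any recently sent notification.
        sim = max((sim_fns[dim](candidate, h) for h in history), default=0.0)
        p = 1.0 if sim > thresholds[dim] else 0.0  # baseline: binary p_i(c)
        demotion *= 1.0 - w * p
    return candidate["relevance"] * demotion

# Two hypothetical dimensions: author and product surface.
same_author = lambda c, h: 1.0 if c["author"] == h["author"] else 0.0
same_surface = lambda c, h: 1.0 if c["surface"] == h["surface"] else 0.0

history = [{"author": "alice", "surface": "stories"}]
candidate = {"relevance": 0.8, "author": "alice", "surface": "reels"}
score = final_score(candidate, history,
                    weights={"author": 0.5, "surface": 0.3},
                    thresholds={"author": 0.5, "surface": 0.5},
                    sim_fns={"author": same_author, "surface": same_surface})
# Same author as a recent notification -> demoted: 0.8 * (1 - 0.5) = 0.4
```

Because the penalties multiply, a candidate that repeats along several dimensions at once is demoted more strongly than one that repeats along only one, while a fully novel candidate keeps its base relevance score untouched.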

The Future of Diversity-Aware Ranking

As we continue evolving our diversity-aware notification ranking system, a next step is to introduce more adaptive, dynamic demotion strategies. Instead of relying on static rules, we plan to make demotion strength responsive to notification volume and delivery timing. For example, as a user receives more notifications, especially of similar type or in rapid succession, the system progressively applies stronger penalties to new notification candidates, effectively mitigating overwhelming experiences caused by high notification volume or tightly spaced deliveries.
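One hypothetical shape for such a volume-responsive schedule (the constants and the saturating-linear form below are illustrative assumptions, not the planned implementation):

```python
def adaptive_weight(base_w, recent_similar_count, step=0.2, w_max=0.95):
    """Demotion weight for one dimension grows with the number of similar
    notifications delivered in the recent window, saturating at w_max so
    the penalty never quite reaches full suppression."""
    return min(w_max, base_w + step * recent_similar_count)

w_quiet = adaptive_weight(0.3, 0)  # no recent similar sends: base weight
w_noisy = adaptive_weight(0.3, 4)  # several similar sends: saturated penalty
```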

Longer term, we see an opportunity to bring large language models (LLMs) into the diversity pipeline. LLMs can help us go beyond surface-level rules by understanding semantic similarity between messages and rephrasing content in more varied, user-friendly ways. This would allow us to personalize notification experiences with richer language and improved relevance while maintaining diversity across topics, tone, and timing.






Simplifying book discovery with ML-powered visual autocomplete suggestions


Every day, millions of customers search for books in various formats (audiobooks, e-books, and physical books) across Amazon and Audible. Traditional keyword autocomplete suggestions, while helpful, usually require several steps before customers find their desired content. Audible took on the challenge of making book discovery more intuitive and personalized while reducing the number of steps to purchase.

We developed an instant visual autocomplete system that enhances the search experience across Amazon and Audible. As the user begins typing a query, our solution provides visual previews with book covers, enabling direct navigation to relevant landing pages instead of the search result page. It also delivers real-time personalized format recommendations and incorporates multiple searchable entities, such as book pages, author pages, and series pages.

Our system needed to understand user intent from just a few keystrokes and determine the most relevant books to display, all while maintaining low latency for millions of queries. Using historical search data, we match keystrokes to products, transforming partial inputs into meaningful search suggestions. To ensure quality, we implemented confidence-based filtering mechanisms, which are particularly important for distinguishing between general queries like “mystery” and specific title searches. To reflect customers’ most recent interests, the system applies time-decay functions to long historical user interaction data.
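The ideas above (matching keystrokes to products from historical search data, exponential time decay, and confidence-based filtering) can be sketched together. All constants, field layouts, and product IDs below are hypothetical examples, not the production system:

```python
from collections import defaultdict

HALF_LIFE_DAYS = 30.0  # illustrative decay constant

def decayed_weight(age_days, half_life=HALF_LIFE_DAYS):
    # Exponential time decay: recent interactions count more.
    return 0.5 ** (age_days / half_life)

def build_prefix_index(interactions, max_prefix_len=20):
    """interactions: iterable of (query, product_id, age_days).
    Maps each query prefix to decayed engagement mass per product."""
    index = defaultdict(lambda: defaultdict(float))
    for query, product, age in interactions:
        w = decayed_weight(age)
        q = query.lower()
        for i in range(1, min(len(q), max_prefix_len) + 1):
            index[q[:i]][product] += w
    return index

def suggest(index, prefix, min_confidence=0.6):
    """Return the top product only if its share of the decayed mass clears
    a confidence bar; otherwise return None, which distinguishes specific
    title searches from broad queries like "mystery"."""
    products = index.get(prefix.lower())
    if not products:
        return None
    total = sum(products.values())
    best, mass = max(products.items(), key=lambda kv: kv[1])
    return best if mass / total >= min_confidence else None

data = [("dungeon crawler carl", "B0_DCC", 5),
        ("dungeon craw", "B0_DCC", 2),
        ("mystery", "B0_M1", 1),
        ("mystery", "B0_M2", 1)]
idx = build_prefix_index(data)
```

Under this sketch, a specific partial title concentrates its decayed mass on one product and clears the bar, while a broad genre query splits its mass across products and falls back to ordinary keyword suggestions.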


To meet the unique requirements of each use case, we developed two distinct technical approaches. On Audible, we deployed a deep pairwise-learning-to-rank (DeepPLTR) model. The DeepPLTR model considers pairs of books and learns to assign a higher score to the one that better matches the customer query.

The DeepPLTR model’s architecture consists of three specialized towers. The left tower factors in contextual features and recent search patterns using a long short-term memory (LSTM) model, which processes data sequentially and considers its prior decisions when issuing a new term in the sequence. The middle tower handles keyword and item engagement history. The right tower factors in customer taste preferences and product descriptions to enable personalization. The model learns from paired examples, but at runtime it relies on books’ absolute scores to assemble a ranked list.
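The pairwise idea (train on ordered pairs, but score each candidate individually at runtime) can be sketched with a simple linear scorer standing in for the three-tower network. Everything below is an illustrative toy, not DeepPLTR itself:

```python
import math
import random

def score(w, features):
    # Linear stand-in for the scoring network: each candidate gets an
    # absolute score, so runtime ranking needs no pairwise comparison.
    return sum(wi * xi for wi, xi in zip(w, features))

def train_pairwise(pairs, dim, lr=0.1, epochs=200, seed=0):
    """Minimize the pairwise logistic loss log(1 + exp(-(s_better - s_worse)))
    so the preferred candidate in each pair receives the higher score."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for better, worse in pairs:
            margin = score(w, better) - score(w, worse)
            g = -1.0 / (1.0 + math.exp(margin))  # d(loss)/d(margin)
            for i in range(dim):
                w[i] -= lr * g * (better[i] - worse[i])
    return w

# Toy pairs: feature 0 marks the item the customer actually engaged with.
pairs = [([1.0, 0.2], [0.0, 0.9]),
         ([1.0, 0.5], [0.0, 0.1])]
w = train_pairwise(pairs, dim=2)
# At runtime, candidates are scored one at a time and sorted by score.
ranked = sorted([[0.0, 0.9], [1.0, 0.5]], key=lambda f: score(w, f), reverse=True)
```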

Training architecture of the DeepPLTR model, which takes in paired examples (green and pink blocks). At runtime, the model scores only a single candidate at a time.

For Amazon, we implemented a two-stage modeling approach involving a probabilistic information-retrieval model to determine the book title that best matches each keyword and a second model that personalizes the book format (audiobooks, e-books, and physical books). This dual-strategy approach maintains low latency while still enabling personalization.

In practice, a customer who types “dungeon craw” in the search bar now sees a visual recommendation for the book Dungeon Crawler Carl, complete with book cover, reducing friction by bypassing a search results page and sending the customer directly to the product detail page. On Audible, the system also personalizes autocomplete results and enriches the discovery experience with relevant connections. These include links to the author’s complete works (Matt Dinniman’s author page) and, for titles that belong to a series, links to the full collection (such as the Dungeon Crawler Carl series).


On Amazon, when the customer clicks on the title, the model personalizes the book-format recommendation (audiobook, e-book, or physical book) and directs the customer to the corresponding product detail page.

In both cases, after the customer has entered a certain number of keystrokes, the system employs a model to detect customer intent (e.g., book title intent for Amazon or author intent for Audible) and determine which visual widget should be displayed.

Audible and Amazon books’ visual autocomplete provides customers with more relevant content more rapidly than traditional autocomplete, and its direct navigation reduces the number of steps to find and access desired books — all while handling millions of queries at low latency.

This technology is not just about making book discovery easier; it is laying the foundation for future improvements in search personalization and visual discovery across Amazon’s ecosystem.

Acknowledgements: Jiun Kim, Sumit Khetan, Armen Stepanyan, Jack Xuan, Nathan Brothers, Eddie Chen, Vincent Lee, Soumy Ladha, Justine Luo, Yuchen Zeng, David Torres, Gali Deutsch, Chaitra Ramdas, Christopher Gomez, Sharmila Tamby, Melissa Ma, Cheng Luo, Jeffrey Jiang, Pavel Fedorov, Ronald Denaux, Aishwarya Vasanth, Azad Bajaj, Mary Heer, Adam Lowe, Jenny Wang, Cameron Cramer, Emmanuel Ankrah, Lydia Diaz, Suzette Islam, Fei Gu, Phil Weaver, Huan Xue, Kimmy Dai, Evangeline Yang, Chao Zhu, Anvy Tran, Jessica Wu, Xiaoxiong Huang, Jiushan Yang






Revolutionizing warehouse automation with scientific simulation



Modern warehouses rely on complex networks of sensors to enable safe and efficient operations. These sensors must detect everything from packages and containers to robots and vehicles, often in changing environments with varying lighting conditions. Most important for Amazon, they must be able to detect barcodes efficiently.


The Amazon Robotics ID (ARID) team focuses on solving this problem. When we first started working on it, we faced a significant bottleneck: optimizing sensor placement required weeks or months of physical prototyping and real-world testing, severely limiting our ability to explore innovative solutions.

To transform this process, we developed Sensor Workbench (SWB), a sensor simulation platform built on NVIDIA’s Isaac Sim that combines parallel processing, physics-based sensor modeling, and high-fidelity 3-D environments. By providing virtual testing environments that mirror real-world conditions with unprecedented accuracy, SWB allows our teams to explore hundreds of configurations in the same amount of time it previously took to test just a few physical setups.

Camera and target selection/positioning

Sensor Workbench users can select different cameras and targets and position them in 3-D space to receive real-time feedback on barcode decodability.

Three key innovations enabled SWB: a specialized parallel-computing architecture that performs simulation tasks across the GPU; a custom CAD-to-OpenUSD (Universal Scene Description) pipeline; and the use of OpenUSD as the ground truth throughout the simulation process.

Parallel-computing architecture

Our parallel-processing pipeline leverages NVIDIA’s Warp library with custom computation kernels to maximize GPU utilization. By maintaining 3-D objects persistently in GPU memory and updating transforms only when objects move, we eliminate redundant data transfers. We also perform computations only when needed — when, for instance, a sensor parameter changes, or something moves. By these means, we achieve real-time performance.
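Setting the GPU specifics aside, the compute-only-when-needed strategy is a dirty-flag cache: results are recomputed only when an input actually changes. A minimal language-agnostic sketch (the real system keeps data resident in GPU memory via NVIDIA Warp kernels; the class and fields below are hypothetical):

```python
class CachedSensorSim:
    """Recompute an expensive simulation result only when a parameter
    changes; otherwise serve the cached result."""

    def __init__(self, compute_fn):
        self._compute = compute_fn
        self._params = None
        self._result = None
        self.recompute_count = 0  # for illustration: how often we really ran

    def result(self, params):
        if params != self._params:          # dirty check: input changed
            self._params = params
            self._result = self._compute(params)
            self.recompute_count += 1
        return self._result                 # cached otherwise

sim = CachedSensorSim(lambda p: p["fov"] * 2)  # stand-in for a heavy kernel
sim.result({"fov": 60})   # first call computes
sim.result({"fov": 60})   # unchanged parameters: served from cache
sim.result({"fov": 90})   # parameter changed: recomputes
```

The same pattern, applied per sensor and per scene object, is what lets the pipeline skip redundant transfers and kernel launches when nothing has moved.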

Visualization methods

Sensor Workbench users can pick sphere- or plane-based visualizations, to see how the positions and rotations of individual barcodes affect performance.

This architecture allows us to perform complex calculations for multiple sensors simultaneously, enabling instant feedback in the form of immersive 3-D visuals. Those visuals represent metrics that barcode-detection machine-learning models need to work, as teams adjust sensor positions and parameters in the environment.

CAD to USD

Our second innovation involved developing a custom CAD-to-OpenUSD pipeline that automatically converts detailed warehouse models into optimized 3-D assets. Our CAD-to-USD conversion pipeline replicates the structure and content of models created in the modeling program SolidWorks with a 1:1 mapping. We start by extracting essential data — including world transforms, mesh geometry, material properties, and joint information — from the CAD file. The full assembly-and-part hierarchy is preserved so that the resulting USD stage mirrors the CAD tree structure exactly.


To ensure modularity and maintainability, we organize the data into separate USD layers covering mesh, materials, joints, and transforms. This layered approach ensures that the converted USD file faithfully retains the asset structure, geometry, and visual fidelity of the original CAD model, enabling accurate and scalable integration for real-time visualization, simulation, and collaboration.
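As a rough illustration of that layering (file names hypothetical), the root layer of a converted asset might declare its sublayers like this, with earlier entries taking precedence in USD's layer-stack ordering:

```
#usda 1.0
(
    subLayers = [
        @./transforms.usda@,
        @./joints.usda@,
        @./materials.usda@,
        @./mesh.usda@
    ]
)
```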

OpenUSD as ground truth

The third important factor was our novel approach to using OpenUSD as the ground truth throughout the entire simulation process. We developed custom schemas that extend beyond basic 3-D-asset information to include enriched environment descriptions and simulation parameters. Our system continuously records all scene activities — from sensor positions and orientations to object movements and parameter changes — directly into the USD stage in real time. We even maintain user interface elements and their states within USD, enabling us to restore not just the simulation configuration but the complete user interface state as well.

This architecture ensures that when USD initial configurations change, the simulation automatically adapts without requiring modifications to the core software. By maintaining this live synchronization between the simulation state and the USD representation, we create a reliable source of truth that captures the complete state of the simulation environment, allowing users to save and re-create simulation configurations exactly as needed. The interfaces simply reflect the state of the world, creating a flexible and maintainable system that can evolve with our needs.

Application

With SWB, our teams can now rapidly evaluate sensor mounting positions and verify overall concepts in a fraction of the time previously required. More importantly, SWB has become a powerful platform for cross-functional collaboration, allowing engineers, scientists, and operational teams to work together in real time, visualizing and adjusting sensor configurations while immediately seeing the impact of their changes and sharing their results with each other.

New perspectives

In projection mode, an explicit target is not needed. Instead, Sensor Workbench uses the whole environment as a target, projecting rays from the camera to identify locations for barcode placement. Users can also switch between a comprehensive three-quarters view and the perspectives of individual cameras.

Due to the initial success in simulating barcode-reading scenarios, we have expanded SWB’s capabilities to incorporate high-fidelity lighting simulations. This allows teams to iterate on new baffle and light designs, further optimizing the conditions for reliable barcode detection, while ensuring that lighting conditions are safe for human eyes, too. Teams can now explore various lighting conditions, target positions, and sensor configurations simultaneously, gleaning insights that would take months to accumulate through traditional testing methods.


Looking ahead, we are working on several exciting enhancements to the system. Our current focus is on integrating more-advanced sensor simulations that combine analytical models with real-world measurement feedback from the ARID team, further increasing the system’s accuracy and practical utility. We are also exploring the use of AI to suggest optimal sensor placements for new station designs, which could potentially identify novel configurations that users of the tool might not consider.

Additionally, we are looking to expand the system to serve as a comprehensive synthetic-data generation platform. This will go beyond just simulating barcode-detection scenarios, providing a full digital environment for testing sensors and algorithms. This capability will let teams validate and train their systems using diverse, automatically generated datasets that capture the full range of conditions they might encounter in real-world operations.

By combining advanced scientific computing with practical industrial applications, SWB represents a significant step forward in warehouse automation development. The platform demonstrates how sophisticated simulation tools can dramatically accelerate innovation in complex industrial systems. As we continue to enhance the system with new capabilities, we are excited about its potential to further transform and set new standards for warehouse automation.




