Events & Conferences
Robotics at Amazon – Amazon Science

The International Conference on Robotics and Automation (ICRA), the major conference in the field of robotics, takes place this week, and Amazon is one of its silver sponsors. To mark the occasion, Amazon Science sat down with three of Amazon’s leading roboticists to discuss the challenges of building robotic systems that interact with human beings in real-world settings.
As the director of Amazon Robotics AI, Siddhartha (Sidd) Srinivasa is responsible for the algorithms that govern the autonomous robots that assist employees in Amazon fulfillment centers, including robots that can pick up and package products and the autonomous carts that carry products from the shelves to the packaging stations.
More about robotics at Amazon
Learn more about robotics at Amazon — including job opportunities — and about Amazon’s participation at ICRA.
Tye Brady, the chief technologist for global Amazon Robotics, helps shape Amazon’s robotics strategy and oversees university outreach for robotics.
Philipp Michel is the senior manager of applied science for Amazon Scout, an autonomous delivery robot that moves along public sidewalks at a walking pace and is currently being field-tested in four U.S. states.
Amazon Science: There are a lot of differences between the problems you’re addressing, but I wondered what the commonalities are.
Sidd Srinivasa: The thing that makes our problem incredibly hard is that we live in an open world. We don’t even know what the inputs that we might face are. In our fulfillment centers, I need to manipulate over 20 million items, and that increases by several hundreds of thousands every day. Oftentimes, our robots have absolutely no idea what they’re picking up, but they need to be able to pick it up carefully without damaging it and package it effortlessly.
Philipp Michel: For Scout, it’s the objects we encounter on the sidewalk, as well as the environment. We operate personal delivery devices in four different U.S. states. The weather conditions, lighting conditions — there’s a huge amount of variability that we explicitly wanted to tackle from the get-go to expose ourselves to all of those difficult, difficult problems.
Tye Brady: For the development of our fulfillment robotics, we have a significant advantage in that we operate in a semi-structured environment. We get to set the rules of the road. Knowing the environment really helps our scientists and engineers contextualize and understand the objects we have to move, manipulate, sort, and identify to fulfill any order. This is a significant advantage in that it gives us real-world project context to pursue our plans for technology development.
Philipp Michel: Another commonality, if it isn’t obvious, is that we rely very heavily on learning from data to solve our problems. For Scout, that is all of the real-world data that the robot receives on its missions, which we continuously try to iterate on to develop machine learning solutions for perception, for localization to a degree, and eventually for navigation as well.
Sidd Srinivasa: Yeah, I completely agree with that. I think that machine learning and adaptive control are critical for superlinear scaling. If we have tens, hundreds, thousands of robots deployed, we can’t have tens, hundreds, thousands of scientists and engineers working on them. We need to scale superlinearly with respect to that.
And I think the open world compels us to think about continual learning. Our machine learning models are trained on some input data distribution. But because of the open world, we have what’s called covariate shift, which is that the data you see doesn’t match the distribution you trained on, and that often causes your machine learning model to be unreasonably overconfident.
So a lot of work that we do is on creating watchdogs that can identify when the input data distribution has deviated from the distribution that it was trained on. Secondly, we do what we call importance sampling such that we can actually pick out the pieces that have changed and retrain our machine learning models.
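The watchdog idea Srinivasa describes can be sketched with a classical two-sample test. The sketch below is illustrative only (Amazon’s production watchdogs are not public): it flags a batch of inputs whose empirical distribution has drifted from the training data, which is the trigger for importance sampling and retraining.

```python
import bisect
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two samples' empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(s, x):  # fraction of sorted sample s that is <= x
        return bisect.bisect_right(s, x) / len(s)

    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in set(a) | set(b))

class DriftWatchdog:
    """Flags when incoming feature values drift from the training
    distribution (hypothetical helper, not Amazon's actual system)."""

    def __init__(self, reference, threshold=0.2):
        self.reference = list(reference)
        self.threshold = threshold

    def check(self, recent):
        # True means: input distribution has deviated, consider retraining.
        return ks_statistic(self.reference, recent) > self.threshold

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(1000)]
watchdog = DriftWatchdog(train)

same = [random.gauss(0.0, 1.0) for _ in range(200)]      # in-distribution
shifted = [random.gauss(3.0, 1.0) for _ in range(200)]   # covariate shift
print(watchdog.check(same))     # False: matches the training distribution
print(watchdog.check(shifted))  # True: distribution has moved
```

In production one would monitor many feature statistics per model, but the principle is the same: compare what the model sees now against what it was trained on, and alarm on divergence.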
Philipp Michel: This is again one of the reasons why we want to have this forcing function of being in a wide variety of different places, so we get exposed to those things as quickly as possible and so that it forces us to develop solutions that handle all of that novel data.
Sidd Srinivasa: That’s a great point that I want to continue to highlight. One of the advantages of having multiple robots is the ability for one system to identify that something has changed, to retrain, and then to share that knowledge to the rest of the robots.
We have an anecdote of that in one of our picking robots. A robot in one part of the world noticed a new package type that came by. It struggled mightily at the beginning, because it had never seen that package type, and it identified that it was struggling. The problem was rectified, and the robot was able to transmit the updated model to all the other robots in the world, such that even before the new package type arrived in some of those locations, those robots were prepared to address it. So there was a blip, but that blip occurred only in one location, and all the other locations were prepared, because this system was able to retrain itself and share that information.
Philipp Michel: Our bots do similar things. If there are new types of obstacles that we haven’t encountered before, we try to adjust our models to recognize those and handle those, and then that gets deployed to all of the bots.
One of the things that keeps me up at night is that we encounter things on the sidewalk that we may not see again for three years. Specific kinds of stone gargoyles used as Halloween decorations on people’s lawns. Or somebody deconstructed a picnic table that had an umbrella, so it is not recognizable as a picnic table to any ML [machine learning] algorithm.
One of the advantages of having multiple robots is the ability to identify that something has changed, to retrain, and then to share that knowledge to the rest of the robots.
Sidd Srinivasa, director of Amazon Robotics AI
So some of our scientific work is on how we balance between generic things that detect that there is something you should not be driving over and things that are quite specific. If it’s an open manhole cover, we need to get very good at recognizing that. Whereas if it’s just some random box, we might not need a specific hierarchy of boxes — just that it is something that we should not be traversing.
Sidd Srinivasa: Another challenge is that when you do change your model, it can have unforeseen consequences. Your model might change in some way that perhaps doesn’t affect your perception but maybe changes the way your robot brakes, and that leads to the wearing of your ball bearings two months from now. We work with these end-to-end systems, where a lot of interesting future research is in being able to understand the consequences of changing parts of the system on the entire system performance.
Philipp Michel: We spent a lot of time thinking about to what degree we should compartmentalize the different parts of the robot stack. There are lots of benefits to trying to be more integrative across them. But there’s a limit to that. One extreme is the cameras-to-motor-torques kind of learning that is very challenging in any real-world robotics application. And then there is the traditional robotics stack, which is well separated into localization, perception, planning, and controls.
We also spend a lot of time thinking about how the stack should evolve over time. What performance gains can we get when we more tightly couple some of these parts? At the same time, we want to have a system that remains as explainable as possible. A lot of thought goes into how we can leverage more integration of the learned components across the stack while at the same time retaining the amounts of explainability and safety functionality that we need.
Sidd Srinivasa: That’s a great point. I completely agree with Philipp that one model to rule them all may not necessarily be the right answer. But oftentimes we end up building machine learning models that share a common backbone but have multiple heads for multiple applications. What an object is, what it means to segment an object, might be similar for picking or stowing or for packaging, but then each of those might require specialized heads that sit on top of a backbone for those specialized tasks.
Philipp Michel: Some factors we consider are battery, range, temperature, space, and compute limitations. So we need to be very efficient in the models that we use and how we optimize them and how we try to leverage as much shared backbone across them as possible with, as Sidd mentioned, different heads for different tasks.
Tye Brady: The nice thing about what Sidd and Philipp describe is that there is always a person to help. The robot can ask another robot through AWS for a different sample or perspective, but the true power comes from asking one of our employees for help in how to perceive or problem-solve. This is super important because the robot can learn from this interaction, allowing our employees to focus on higher-level tasks, things you and I would call common sense. That is not so easy in the robotics world, but we are working to design our machines to understand intent and redirection to reinforce systemic models our robots have of the world. All three of us have that in common.
Amazon Science: When I asked about the commonalities between your projects, one of the things I was thinking about is that you all have robots that are operating in the same environments as humans. How does that complicate the problem?
Tye Brady: When we design our machines right, humans never complicate the problem; they only make it easier. It is up to us to make machines that enhance our human environment by providing a safety benefit and a convenience to our employees. A well-designed machine may fill a deficit for employees that’s not possible without a machine. Either way, our robotics should make us more intelligent, more capable, and freer to do the things that matter most to us.
Philipp Michel: Our direct interactions with our customers and the community are of utmost importance for us. So there’s a lot of work that we do on the CX [customer experience] side in trying to make that as delightful as possible.
Another thing that’s important for us is that the robot has delightful and safe and understandable interactions with people who might not be customers but whom the robot encounters on its way. People haven’t really been exposed to autonomous delivery devices before. So we think a lot about what those interactions should look like on the sidewalk.
A big part of the bot’s identity is not just its appearance but how it manifests that identity through its motion and its yielding behaviors.
Philipp Michel, senior manager of applied science for Amazon Scout
On the one hand, the robot should try to act as much like a normal traffic participant as possible, because that’s what people are used to. But on the other hand, people are not used to this new device, so they don’t necessarily assume it’s going to act like a pedestrian. It’s something that we constantly think about. And that’s not just at the product level; it really flows down to the bot behavior, which ultimately is controlled by the entire stack. A big part of the bot’s identity is not just its appearance but how it manifests that identity through its motion, its yielding behaviors, and all of those kinds of things.
Sidd Srinivasa: Our robots are entering people’s worlds. And so we have to be respectful of all the complicated interactions that happen inside our human worlds. When we walk, when we drive, there is this complex social dance that we do in addition to the tasks that we are performing. And it’s important for our robots, first of all, to have awareness of it and, secondly, to participate in it.
And it’s really hard, I must say. When you’re driving, it’s sometimes hard to tell what other people are thinking about. And then it’s hard to decide how you want to act based on what they’re thinking about. So just the inference problem is hard, and then closing the loop is even harder.
If you’re playing chess or go against a human, then it’s easier to predict what they’re going to do, because the rules are well laid out. If you play assuming that your opponent is optimal, then you’re going to do well, even if they are suboptimal. That’s a guarantee in certain two-player games.
But that’s not the case here. We’re playing this sort of cooperative game of making sure everybody wins. And when you’re playing these sorts of cooperative games, then it’s actually very, very hard to predict even the good intentions of the other agents that you’re working with.
Philipp Michel: And behavior varies widely. We have times when pets completely ignore the robot, could not care at all, and we have times when the dog goes straight towards the bot. And it’s similar with pedestrians. Some just ignore the bot, while others come right up to it. Particularly kids: they’re super curious and interact very closely. We need to be able to handle all of those types of scenarios safely. All of that variability makes the problem super exciting.
Tye Brady: It is an exciting time to be in robotics at Amazon! If any roboticists are out there listening, come join us. It’s wicked awesome.
Events & Conferences
A New Ranking Framework for Better Notification Quality on Instagram

- We’re sharing how Meta is applying machine learning (ML) and diversity algorithms to improve notification quality and user experience.
- We’ve introduced a diversity-aware notification ranking framework to reduce uniformity and deliver a more varied and engaging mix of notifications.
- This new framework reduces the volume of notifications and drives higher engagement rates through more diverse outreach.
Notifications are one of the most powerful tools for bringing people back to Instagram and enhancing engagement. Whether it’s a friend liking your photo, another close friend posting a story, or a suggestion for a reel you might enjoy, notifications help surface moments that matter in real time.
Instagram leverages machine learning (ML) models to decide who should get a notification, when to send it, and what content to include. These models are trained to optimize for positive user engagement, such as click-through rate (CTR) – the probability of a user clicking a notification – as well as other metrics like time spent.
However, while engagement-optimized models are effective at driving interactions, there’s a risk that they might overprioritize the product types and authors someone has previously engaged with. This can lead to overexposure to the same creators or the same product types while overlooking other valuable and diverse experiences.
This means people could miss out on content that would give them a more balanced, satisfying, and enriched experience. Over time, this can make notifications feel spammy and increase the likelihood that people will disable them altogether.
The real challenge lies in finding the right balance: How can we introduce meaningful diversity into the notification experience without sacrificing the personalization and relevance people on Instagram have come to expect?
To tackle this, we’ve introduced a diversity-aware notification ranking framework that helps deliver more diverse, better curated, and less repetitive notifications. This framework has significantly reduced daily notification volume while improving CTR. It also introduces several benefits:
- The extensibility of incorporating customized soft penalty (demotion) logic for each dimension, enabling more adaptive and sophisticated diversity strategies.
- The flexibility of tuning demotion strength across dimensions like content, author, and product type via adjustable weights.
- The balancing of personalization and diversity, ensuring notifications remain both relevant and varied.
The Risks of Notifications without Diversity
The issue of overexposure in notifications often shows up in two major ways:
Overexposure to the same author: People might receive notifications that are mostly about the same friend. For example, if someone often interacts with content from a particular friend, the system may continue surfacing notifications from that person alone – ignoring other friends they also engage with. This can feel repetitive and one-dimensional, reducing the overall value of notifications.
Overexposure to the same product surface: People might mostly receive notifications from the same product surface such as Stories, even when Feed or Reels could provide value. For example, someone may be interested in both reel and story notifications but has recently interacted more often with stories. Because the system heavily prioritizes past engagement, it sends only story notifications, overlooking the person’s broader interests.
Introducing Instagram’s Diversity-Aware Notification Ranking Framework
Instagram’s diversity-aware notification ranking framework is designed to enhance the notification experience by balancing the predicted potential for user engagement with the need for content diversity. This framework introduces a diversity layer on top of the existing engagement ML models, applying multiplicative penalties to the candidate scores generated by these models, as figure 1, below, shows.
The diversity layer evaluates each notification candidate’s similarity to recently sent notifications across multiple dimensions such as content, author, notification type, and product surface. It then applies carefully calibrated penalties—expressed as multiplicative demotion factors—to downrank candidates that are too similar or repetitive. The adjusted scores are used to re-rank the candidates, enabling the system to select notifications that maintain high engagement potential while introducing meaningful diversity. In the end, the quality bar selects the top-ranked candidate that passes both the ranking and diversity criteria.
Mathematical Formulation
Within the diversity layer, we apply a multiplicative demotion factor to the base relevance score of each candidate. Given a notification candidate c, we compute its final score as the product of its base ranking score and a diversity demotion multiplier:

score(c) = R(c) · D(c)

where R(c) represents the candidate’s base relevance score, and D(c) ∈ [0,1] is a penalty factor that reduces the score based on similarity to recently sent notifications. We define a set of semantic dimensions (e.g., author, product type) along which we want to promote diversity. For each dimension i, we compute a similarity signal pᵢ(c) between candidate c and the set of historical notifications H, using a maximal marginal relevance (MMR) approach:

pᵢ(c) = max over h ∈ H of simᵢ(c, h)

where simᵢ(·,·) is a predefined similarity function for dimension i. In our baseline implementation, pᵢ(c) is binary: it equals 1 if the similarity exceeds a threshold τᵢ and 0 otherwise.
The final demotion multiplier is defined as:

D(c) = ∏ᵢ (1 − wᵢ · pᵢ(c))

where each wᵢ ∈ [0,1] controls the strength of demotion for its respective dimension. This formulation ensures that candidates similar to previously delivered notifications along one or more dimensions are proportionally down-weighted, reducing redundancy and promoting content variation. The use of a multiplicative penalty allows for flexible control across multiple dimensions while still preserving high-relevance candidates.
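With binary similarity signals and per-dimension weights, the whole re-ranking step fits in a few lines. The sketch below uses exact match on a dimension’s field as the similarity function; the weights, scores, and field names are invented for illustration and are not Instagram’s production values.

```python
def rerank(candidates, history, dims):
    """Re-rank notification candidates by base score times a
    per-dimension diversity demotion multiplier.

    candidates: list of (base_score, notification dict)
    history:    recently sent notification dicts
    dims:       {dimension_name: demotion_weight in [0, 1]}
    Similarity is binary exact match on the dimension's field:
    the signal is 1 if any recent notification shares the value."""
    scored = []
    for base, cand in candidates:
        d = 1.0
        for dim, w in dims.items():
            p = 1.0 if any(h.get(dim) == cand.get(dim) for h in history) else 0.0
            d *= 1.0 - w * p  # multiplicative penalty per matching dimension
        scored.append((base * d, cand))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored

history = [{"author": "alice", "surface": "stories"}]
candidates = [
    (0.90, {"author": "alice", "surface": "stories"}),  # repetitive
    (0.80, {"author": "bob",   "surface": "reels"}),    # fresh
]
ranked = rerank(candidates, history, {"author": 0.5, "surface": 0.4})
# The repetitive candidate drops to 0.90 * 0.5 * 0.6 = 0.27,
# so the fresh candidate (0.80) wins despite a lower base score.
print(ranked[0][1]["author"])  # bob
```

Note how a high-relevance candidate is demoted proportionally, not excluded outright, which is what preserves personalization while adding variety.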
The Future of Diversity-Aware Ranking
As we continue evolving our notification diversity-aware ranking system, a next step is to introduce more adaptive, dynamic demotion strategies. Instead of relying on static rules, we plan to make demotion strength responsive to notification volume and delivery timing. For example, as a user receives more notifications—especially of similar type or in rapid succession—the system progressively applies stronger penalties to new notification candidates, effectively mitigating overwhelming experiences caused by high notification volume or tightly spaced deliveries.
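One hypothetical way to make demotion strength responsive to volume and timing is to let every recent similar notification contribute a penalty that decays with the time since it was delivered, so bursts of tightly spaced notifications are demoted hardest. The function name and constants below are invented for illustration.

```python
import math

def adaptive_penalty(recent_sends, now, half_life_s=3600.0, per_send=0.15):
    """Volume-and-timing-aware demotion multiplier in (0, 1].

    recent_sends: delivery timestamps (seconds) of similar notifications
    now:          current timestamp (seconds)
    Each past send contributes exponentially decaying "pressure";
    more pressure means a smaller multiplier (stronger demotion)."""
    pressure = sum(
        math.exp(-math.log(2) * (now - t) / half_life_s)
        for t in recent_sends
    )
    return 1.0 / (1.0 + per_send * pressure)

# Three similar notifications in the last few minutes are demoted
# much harder than one sent an hour ago.
burst = adaptive_penalty([995.0, 990.0, 985.0], now=1000.0)
stale = adaptive_penalty([-2600.0], now=1000.0)
print(burst < stale)  # True
```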
Longer term, we see an opportunity to bring large language models (LLMs) into the diversity pipeline. LLMs can help us go beyond surface-level rules by understanding semantic similarity between messages and rephrasing content in more varied, user-friendly ways. This would allow us to personalize notification experiences with richer language and improved relevance while maintaining diversity across topics, tone, and timing.
Events & Conferences
Simplifying book discovery with ML-powered visual autocomplete suggestions

Every day, millions of customers search for books in various formats (audiobooks, e-books, and physical books) across Amazon and Audible. Traditional keyword autocomplete suggestions, while helpful, usually require several steps before customers find their desired content. Audible took on the challenge of making book discovery more intuitive and personalized while reducing the number of steps to purchase.
We developed an instant visual autocomplete system that enhances the search experience across Amazon and Audible. As the user begins typing a query, our solution provides visual previews with book covers, enabling direct navigation to relevant landing pages instead of the search result page. It also delivers real-time personalized format recommendations and incorporates multiple searchable entities, such as book pages, author pages, and series pages.
Our system needed to understand user intent from just a few keystrokes and determine the most relevant books to display, all while maintaining low latency for millions of queries. Using historical search data, we match keystrokes to products, transforming partial inputs into meaningful search suggestions. To ensure quality, we implemented confidence-based filtering mechanisms, which are particularly important for distinguishing between general queries like “mystery” and specific title searches. To reflect customers’ most recent interests, the system applies time-decay functions to long historical user interaction data.
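The pipeline described above (aggregate historical query-to-title interactions with time decay, then filter low-confidence matches before serving prefix suggestions) can be sketched as follows. All names, scores, and thresholds are illustrative; this is not the production system.

```python
def build_suggester(events, now, half_life_days=30.0):
    """Toy prefix-to-title suggester.

    events: (query, title, timestamp_in_days) triples from search history
    Scores each (query, title) pair with exponential time decay, so
    recent interactions outweigh old ones."""
    scores = {}
    for query, title, ts in events:
        decay = 0.5 ** ((now - ts) / half_life_days)  # time-decay function
        scores[(query, title)] = scores.get((query, title), 0.0) + decay

    def suggest(prefix, k=3, min_score=0.2):
        matched = {}
        for (query, title), s in scores.items():
            if query.startswith(prefix):
                matched[title] = matched.get(title, 0.0) + s
        # Confidence-based filtering: drop low-evidence suggestions.
        ranked = sorted(((s, t) for t, s in matched.items() if s >= min_score),
                        reverse=True)
        return [t for _, t in ranked[:k]]

    return suggest

events = [
    ("dungeon craw", "Dungeon Crawler Carl", 95.0),   # recent
    ("dungeon craw", "Dungeon Crawler Carl", 90.0),   # recent
    ("dungeon", "Dungeons & Dragons Art", 10.0),      # old, decays away
]
suggest = build_suggester(events, now=100.0)
print(suggest("dungeon craw"))  # ['Dungeon Crawler Carl']
```

A production system would add per-user personalization and serve from a precomputed index to hit latency targets, but the scoring shape is the same.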
To meet the unique requirements of each use case, we developed two distinct technical approaches. On Audible, we deployed a deep pairwise-learning-to-rank (DeepPLTR) model. The DeepPLTR model considers pairs of books and learns to assign a higher score to the one that better matches the customer query.
The DeepPLTR model’s architecture consists of three specialized towers. The left tower factors in contextual features and recent search patterns using a long short-term memory (LSTM) model, which processes data sequentially and considers its prior decisions when producing each new term in the sequence. The middle tower handles keyword and item engagement history. The right tower factors in customer taste preferences and product descriptions to enable personalization. The model learns from paired examples, but at runtime, it relies on books’ absolute scores to assemble a ranked list.
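DeepPLTR’s internals aren’t spelled out here, but the pairwise-learning-to-rank objective it describes is compact and shared with classic models such as RankNet: train on score differences for pairs, then rank by absolute scores at runtime. A minimal sketch, with all names hypothetical:

```python
import math

def pairwise_logistic_loss(score_better, score_worse):
    """RankNet-style pairwise objective: the loss is small when the
    better-matching book of a pair already has the higher score."""
    return math.log(1.0 + math.exp(-(score_better - score_worse)))

def rank(books_with_scores):
    """At runtime, no pairs are needed: sort by absolute model score."""
    return [b for b, _ in sorted(books_with_scores, key=lambda t: -t[1])]

# Correctly ordered pair -> low loss; inverted pair -> high loss.
print(pairwise_logistic_loss(2.0, 0.5) < pairwise_logistic_loss(0.5, 2.0))
print(rank([("Book A", 0.3), ("Book B", 0.9)]))  # ['Book B', 'Book A']
```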
For Amazon, we implemented a two-stage modeling approach involving a probabilistic information-retrieval model to determine the book title that best matches each keyword and a second model that personalizes the book format (audiobooks, e-books, and physical books). This dual-strategy approach maintains low latency while still enabling personalization.
In practice, a customer who types “dungeon craw” in the search bar now sees a visual recommendation for the book Dungeon Crawler Carl, complete with book cover, reducing friction by bypassing a search results page and sending the customer directly to the product detail page. On Audible, the system also personalizes autocomplete results and enriches the discovery experience with relevant connections. These include links to the author’s complete works (Matt Dinniman’s author page) and, for titles that belong to a series, links to the full collection (such as the Dungeon Crawler Carl series).
On Amazon, when the customer clicks on the title, the model recommends the most suitable book format (audiobook, e-book, or physical book) and directs the customer to the corresponding product detail page.
In both cases, after the customer has entered a certain number of keystrokes, the system employs a model to detect customer intent (e.g., book title intent for Amazon or author intent for Audible) and determine which visual widget should be displayed.
Audible and Amazon books’ visual autocomplete provides customers with more relevant content more rapidly than traditional autocomplete, and its direct navigation reduces the number of steps to find and access desired books — all while handling millions of queries at low latency.
This technology is not just about making book discovery easier; it is laying the foundation for future improvements in search personalization and visual discovery across Amazon’s ecosystem.
Acknowledgements: Jiun Kim, Sumit Khetan, Armen Stepanyan, Jack Xuan, Nathan Brothers, Eddie Chen, Vincent Lee, Soumy Ladha, Justine Luo, Yuchen Zeng, David Torres, Gali Deutsch, Chaitra Ramdas, Christopher Gomez, Sharmila Tamby, Melissa Ma, Cheng Luo, Jeffrey Jiang, Pavel Fedorov, Ronald Denaux, Aishwarya Vasanth, Azad Bajaj, Mary Heer, Adam Lowe, Jenny Wang, Cameron Cramer, Emmanuel Ankrah, Lydia Diaz, Suzette Islam, Fei Gu, Phil Weaver, Huan Xue, Kimmy Dai, Evangeline Yang, Chao Zhu, Anvy Tran, Jessica Wu, Xiaoxiong Huang, Jiushan Yang
Events & Conferences
Revolutionizing warehouse automation with scientific simulation

Modern warehouses rely on complex networks of sensors to enable safe and efficient operations. These sensors must detect everything from packages and containers to robots and vehicles, often in changing environments with varying lighting conditions. Most important for Amazon, they must be able to detect barcodes efficiently.
The Amazon Robotics ID (ARID) team focuses on solving this problem. When we first started working on it, we faced a significant bottleneck: optimizing sensor placement required weeks or months of physical prototyping and real-world testing, severely limiting our ability to explore innovative solutions.
To transform this process, we developed Sensor Workbench (SWB), a sensor simulation platform built on NVIDIA’s Isaac Sim that combines parallel processing, physics-based sensor modeling, and high-fidelity 3-D environments. By providing virtual testing environments that mirror real-world conditions with unprecedented accuracy, SWB allows our teams to explore hundreds of configurations in the same amount of time it previously took to test just a few physical setups.
Camera and target selection/positioning
Sensor Workbench users can select different cameras and targets and position them in 3-D space to receive real-time feedback on barcode decodability.
Three key innovations enabled SWB: a specialized parallel-computing architecture that performs simulation tasks across the GPU; a custom CAD-to-OpenUSD (Universal Scene Description) pipeline; and the use of OpenUSD as the ground truth throughout the simulation process.
Parallel-computing architecture
Our parallel-processing pipeline leverages NVIDIA’s Warp library with custom computation kernels to maximize GPU utilization. By maintaining 3-D objects persistently in GPU memory and updating transforms only when objects move, we eliminate redundant data transfers. We also perform computations only when needed — when, for instance, a sensor parameter changes, or something moves. By these means, we achieve real-time performance.
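The compute-only-when-needed discipline is essentially a dirty-flag cache. Below is a minimal, framework-free sketch of that invalidation logic; the real system runs NVIDIA Warp kernels on the GPU, and the class and field names here are invented stand-ins.

```python
class SensorSim:
    """Recompute-on-change pattern: cached results are invalidated only
    when a parameter or transform actually changes, so unchanged frames
    cost nothing (the heavy GPU kernel is simulated by a dict copy)."""

    def __init__(self):
        self._params = {}
        self._dirty = True
        self._cache = None
        self.compute_calls = 0  # counts how often the "kernel" runs

    def set_param(self, key, value):
        if self._params.get(key) != value:
            self._params[key] = value
            self._dirty = True  # invalidate only on a real change

    def result(self):
        if self._dirty:
            self.compute_calls += 1
            self._cache = dict(self._params)  # stand-in for the heavy kernel
            self._dirty = False
        return self._cache

sim = SensorSim()
sim.set_param("fov_deg", 70)
sim.result()
sim.result()                  # served from cache, no recompute
sim.set_param("fov_deg", 70)  # no-op: value unchanged, stays clean
sim.result()
print(sim.compute_calls)  # 1
```

Keeping assets resident (here, the `_params` dict; in SWB, meshes in GPU memory) and updating only deltas is what turns a batch simulation into an interactive one.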
Visualization methods
Sensor Workbench users can pick sphere- or plane-based visualizations to see how the positions and rotations of individual barcodes affect performance.
This architecture allows us to perform complex calculations for multiple sensors simultaneously, enabling instant feedback in the form of immersive 3-D visuals. Those visuals represent the metrics that barcode-detection machine-learning models need in order to work, updating as teams adjust sensor positions and parameters in the environment.
CAD to USD
Our second innovation involved developing a custom CAD-to-OpenUSD pipeline that automatically converts detailed warehouse models into optimized 3-D assets. Our CAD-to-USD conversion pipeline replicates the structure and content of models created in the modeling program SolidWorks with a 1:1 mapping. We start by extracting essential data — including world transforms, mesh geometry, material properties, and joint information — from the CAD file. The full assembly-and-part hierarchy is preserved so that the resulting USD stage mirrors the CAD tree structure exactly.
To ensure modularity and maintainability, we organize the data into separate USD layers covering mesh, materials, joints, and transforms. This layered approach ensures that the converted USD file faithfully retains the asset structure, geometry, and visual fidelity of the original CAD model, enabling accurate and scalable integration for real-time visualization, simulation, and collaboration.
OpenUSD as ground truth
The third important factor was our novel approach to using OpenUSD as the ground truth throughout the entire simulation process. We developed custom schemas that extend beyond basic 3-D-asset information to include enriched environment descriptions and simulation parameters. Our system continuously records all scene activities — from sensor positions and orientations to object movements and parameter changes — directly into the USD stage in real time. We even maintain user interface elements and their states within USD, enabling us to restore not just the simulation configuration but the complete user interface state as well.
This architecture ensures that when USD initial configurations change, the simulation automatically adapts without requiring modifications to the core software. By maintaining this live synchronization between the simulation state and the USD representation, we create a reliable source of truth that captures the complete state of the simulation environment, allowing users to save and re-create simulation configurations exactly as needed. The interfaces simply reflect the state of the world, creating a flexible and maintainable system that can evolve with our needs.
Application
With SWB, our teams can now rapidly evaluate sensor mounting positions and verify overall concepts in a fraction of the time previously required. More importantly, SWB has become a powerful platform for cross-functional collaboration, allowing engineers, scientists, and operational teams to work together in real time, visualizing and adjusting sensor configurations while immediately seeing the impact of their changes and sharing their results with each other.
New perspectives
In projection mode, an explicit target is not needed. Instead, Sensor Workbench uses the whole environment as a target, projecting rays from the camera to identify locations for barcode placement. Users can also switch between a comprehensive three-quarters view and the perspectives of individual cameras.
Due to the initial success in simulating barcode-reading scenarios, we have expanded SWB’s capabilities to incorporate high-fidelity lighting simulations. This allows teams to iterate on new baffle and light designs, further optimizing the conditions for reliable barcode detection, while ensuring that lighting conditions are safe for human eyes, too. Teams can now explore various lighting conditions, target positions, and sensor configurations simultaneously, gleaning insights that would take months to accumulate through traditional testing methods.
Looking ahead, we are working on several exciting enhancements to the system. Our current focus is on integrating more-advanced sensor simulations that combine analytical models with real-world measurement feedback from the ARID team, further increasing the system’s accuracy and practical utility. We are also exploring the use of AI to suggest optimal sensor placements for new station designs, which could potentially identify novel configurations that users of the tool might not consider.
Additionally, we are looking to expand the system to serve as a comprehensive synthetic-data generation platform. This will go beyond just simulating barcode-detection scenarios, providing a full digital environment for testing sensors and algorithms. This capability will let teams validate and train their systems using diverse, automatically generated datasets that capture the full range of conditions they might encounter in real-world operations.
By combining advanced scientific computing with practical industrial applications, SWB represents a significant step forward in warehouse automation development. The platform demonstrates how sophisticated simulation tools can dramatically accelerate innovation in complex industrial systems. As we continue to enhance the system with new capabilities, we are excited about its potential to further transform and set new standards for warehouse automation.