Tools & Platforms

Cities and facility operators turn to AI for recycling education revamp


 Artificial intelligence is changing the face of waste education. City staff and facility operators are looking beyond just paper mailers and public service announcements to cameras, screens and text prompts. The technology is designed to meet users at the bins and help them navigate the complexities of local recycling rules. 

AI has been used for years to identify items and trigger physical separation within MRFs, and more recently it has been deployed to track contamination levels at the point of collection when haulers integrate camera systems into their trucks. But neither of these use cases goes as far upstream as helping consumers make the right decision about where to put their refuse, recyclables or organics.

“This was really an effort to help people better understand small nuances, and it also gave us new insights about common questions and gaps in knowledge,” said Pamela Perez, the marketing manager for LA Sanitation & Environment, which recently introduced Professor Green, an AI tool for organics recycling education. 

Proponents of these AI feedback systems hope they can cut down on contamination and increase capture rates, but the technology is still evolving.

Whether the tools are chatbots or item-identifying cameras, operators have to tinker with the software over time and hone its ability to offer users the right responses. Experts say that accurate item identification — particularly at the curb, where trash and recycling come in an almost infinite number of forms — could be a challenge for some iterations of this consumer-facing AI. Another hurdle will be convincing consumers to use it.

Send a text or take a photo

Contamination rates remain stubbornly high, and recycling rates are stagnant in many parts of the U.S.

In New York City, for example, non-recyclable contaminants in the metal, plastic and glass waste stream grew from nearly 34 pounds to almost 49 pounds per household between 2017 and 2023. The Recycling Partnership estimated in 2020 that the inbound material contamination rate was 17%.

Waste and recycling professionals hoping to improve these statistics are looking to AI to coach residents and visitors on the spot.


A rendering of Oscar Sort’s AI-enabled sorting technology, which uses cameras to help people decide where to place their items at airports, universities and other public areas.

Permission granted by Intuitive AI

 

Some public-facing AI sorting systems identify items the same way MRF technologies do. Oscar Sort, a platform offered by Intuitive AI, uses a camera to see what a user is holding. A follow-up message on a mounted screen shows which bin to drop it in — trash, recycling, compost, or even (as one client requested) a chopstick-only receptacle for a recycling service to turn into wooden furniture. The software can also offer multi-part instructions, like directing a coffee cup to the trash bin and a cardboard sleeve to the recycling bin.
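The screen logic described above can be pictured as a lookup from a recognized item to one or more bin instructions, including multi-part items whose components go to different bins. A minimal sketch, with invented labels and bin names (not Intuitive AI’s actual schema):

```python
# Minimal sketch: mapping a vision model's item label to disposal
# instructions, including multi-part items. Labels and bin names are
# illustrative, not Intuitive AI's actual schema.

BIN_RULES = {
    "banana_peel": [("banana peel", "compost")],
    "soda_can": [("soda can", "recycling")],
    # Multi-part item: each component goes to a different bin.
    "coffee_cup_with_sleeve": [
        ("coffee cup", "trash"),
        ("cardboard sleeve", "recycling"),
    ],
}

def instructions_for(label: str) -> str:
    """Turn a classifier label into a human-readable screen message."""
    parts = BIN_RULES.get(label)
    if parts is None:
        return "Not sure -- when in doubt, use the trash bin."
    return "; ".join(f"{item} -> {bin_}" for item, bin_ in parts)

print(instructions_for("coffee_cup_with_sleeve"))
# -> coffee cup -> trash; cardboard sleeve -> recycling
```

The hard part in practice is the classifier that produces the label, not the lookup; the table itself would be tuned per facility, as with the client-requested chopstick bin.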

In the U.S., facilities such as the San Francisco Ferry Building and Seattle-Tacoma International Airport are in the early stages of using Oscar on site. The company says over 25 airports and 50 universities asked to become customers in the last year.

Each Oscar installation costs $10,000 to $15,000, after which yearly service and data fees run $5,500 to $6,500, said Brian Sano, head of strategy at Intuitive AI. In 2024, the company launched an ad-based offering: brands can pay to advertise on the embedded screens, potentially generating enough revenue for the facility to cover annual maintenance costs and then some.

Consumer-facing sorting AI can also take in what the public offers via text, which is how the city of Los Angeles opted for its technology to operate.

Professor Green, the AI educator built for the city by software company Hello Lamp Post, works like a chatbot. Users in one of four pilot neighborhoods — Porter Ranch, Northridge, South LA and Watts — scan a QR code or go to the city website to text Professor Green. The system replies to tell users whether an item can go in a city-provided green bin or should go in their trash or recycling bins instead. The chatbot understands and replies in English, Spanish, Korean, Tagalog, Ethiopian and Armenian, languages selected to best serve populations in the pilot areas.

Ideally, Professor Green clears up local confusion about composting rules. The city of Los Angeles has many residents who move into the municipality and may not be familiar with local guidelines, Perez said, or who need help distinguishing its rules from those of nearby areas. For example, some other municipalities in Los Angeles County ask residents to line their bins with compostable bags, but the city of Los Angeles only accepts paper-product liners.

Too few trash photos

Both Intuitive AI and LA Sanitation staff dip into the data to make sure that their programs are identifying items correctly.

Intuitive AI does this for Oscar customers by comparing, every day, the instructions Oscar offered with recordings of what the user was holding, supplemented by four waste audits a year for most customers.

LA Sanitation staff also comb through Professor Green requests every day. They spend at most an hour on the inquiries, which is often more than enough time to spot any issues. For example, staff taught the AI to respond to questions about tamale waste, but soon realized it didn’t know corn husks were part of the food. Perez said the team has since instructed Professor Green to associate the two.
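The kind of term association the LA Sanitation team describes can be pictured as an alias table layered over a bin lookup. A toy sketch with invented entries (Hello Lamp Post’s actual implementation is not public):

```python
# Toy sketch of associating user terms with a canonical item so a
# text-based sorting assistant answers consistently. The alias table
# and bin assignments are illustrative, not the city's actual rule set.

ITEM_BINS = {
    "food scraps": "green bin",
    "corn husk": "green bin",   # taught after the tamale gap was spotted
    "pet waste": "trash",
}

ALIASES = {
    "tamale": "corn husk",      # tamale waste includes the husk
    "dog poop": "pet waste",
    "doggie bag": "pet waste",
}

def which_bin(query: str) -> str:
    """Resolve a free-text term through the alias table, then look up its bin."""
    item = ALIASES.get(query.lower().strip(), query.lower().strip())
    bin_ = ITEM_BINS.get(item)
    if bin_ is None:
        return "I'm not sure -- try describing the item another way."
    return f"{item}: {bin_}"

print(which_bin("Tamale"))  # resolves via the alias table to "corn husk: green bin"
```

Daily review of logged queries is what surfaces the missing aliases, as with the corn husk example above.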

Text-based sorting inquiries will likely give users more accurate instructions than camera-based technology, said Sharon Hsiao, an assistant professor of computer science and engineering at Santa Clara University. 

That’s because having people tell the AI what they’re trying to dispose of sidesteps the need to train the software to recognize what trash looks like. Written prompts can still span a wide range: as Perez pointed out, her office had a long list of terms for “pet waste” that Professor Green needed to recognize. But visual identification has even more variability.

“Cigarette butts or COVID masks — those are very distinct,” Hsiao said. “But if you crumple up paper, it won’t always be the same shape.” 

There are also few photos of trash that are freely available for AI models to learn from.

Hsiao and her colleagues tried to compile a database of these publicly accessible training images when building an AI waste sorter that relies on a smartphone camera. Not only were there too few images to train the software, but the computational power needed for the AI to recognize any possible item goes beyond anything a smartphone can handle, Hsiao said.

An AI system focused on waste requires a small computer and is more likely to be accurate if developers can train the software to recognize a limited array of items, Hsiao said. Places that restrict the kinds of materials allowed inside, like entertainment venues or airports, could be a natural fit. 

Lower barriers are better

Experts expect technology to play an increasing role as local governments and facility operators look for ways to evolve their recycling education. But figuring out what will resonate with the public is an ongoing process.

AI-enabled camera systems might be easier for users, despite being harder to teach. In surveys, Hsiao and her team found that people tend to see texting or responding to voice prompts as more work than waiting for a camera to register what it’s seeing. And all of these options were considered significantly more time-consuming than the status quo: the split-second act of tossing an item into whichever bin it seems to belong in.

“All the people that we surveyed wanted to be able to use their phone to scan items to tell them which bin it’s supposed to be in,” Hsiao said. “In reality, they actually don’t do it.” 

To convince people to take the time and use the AI identification they claim to want, Hsiao and her colleagues are working on a sorting system that also tells users about the benefits of their actions. Her team is experimenting with feedback about money or greenhouse gas emissions saved via correct sorting.

For now, the simpler text-based Professor Green option appears to be resonating. Angelenos have had over 4,000 conversations with the AI since its debut in March. Use has been particularly high in South LA, a low-income neighborhood where the majority of households speak Spanish at home.

LA Sanitation is now looking at expanding Professor Green to other neighborhoods. But to know for certain whether the AI system is meaningfully changing contamination rates or diverting more material from landfills, Perez and her team would have to turn to a more old-fashioned tool: a waste audit.

This story first appeared in the Waste Dive: Recycling newsletter.

 





Google Cloud expects strong growth thanks to demand for AI



Google Cloud CEO Thomas Kurian paints a rosy picture for the cloud service provider. During a Goldman Sachs technology conference in San Francisco, he said that the company has approximately $106 billion in contracts outstanding. According to him, more than half of that can be converted into revenue in the next two years.

In the second quarter of 2025, parent company Alphabet reported $13.6 billion in revenue for Google Cloud, an increase of 32 percent over the previous year. If the forecast is correct, according to The Register, this means that the cloud service provider could add around $53 billion in additional revenue by 2027.
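The Register’s $53 billion figure is back-of-envelope arithmetic on the backlog, with the “more than half” convertible share rounded to 0.5:

```python
# Back-of-envelope check on the reported figures (all in billions USD).
backlog = 106
convertible_share = 0.5           # "more than half" per Kurian, rounded down
additional_revenue = backlog * convertible_share
print(additional_revenue)         # 53.0, matching The Register's estimate

# Q2 2025 run rate for context: $13.6B per quarter, annualized.
annualized = 13.6 * 4
print(annualized)                 # 54.4
```

So the convertible backlog alone is roughly the size of a full year of Google Cloud revenue at the Q2 2025 pace.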

Google Cloud’s market position is often compared to that of its biggest rivals. Microsoft reported annual revenue of $75 billion for Azure this year, while AWS recorded $30.9 billion in the same quarter, growth of 17.5 percent.

Faster transition to the cloud

Kurian emphasized that many companies still run IT systems on-premises. He expects the transition to the cloud to accelerate, with artificial intelligence playing a decisive role. Increasingly, customers are looking for suppliers who can help transform their business operations with AI applications, rather than just hosting services.

Google claims to have an advantage in this regard thanks to its own investments in AI infrastructure. Its systems are said to be more energy-efficient and deliver more computing power than those of its competitors. According to Kurian, the storage and network are also designed in such a way that they can easily switch from training to inference.

For investors, the most important thing is how AI is converted into revenue. Kurian mentioned usage-based rates, subscriptions, and value-based models, such as paying per saved service request or higher ad conversions. In addition, AI use leads to increased purchases of security and data services.

According to Kurian, 65 percent of customers now use Google Cloud AI tools. On average, this group purchases more products than organizations that do not yet use AI. Examples of applications include digital product development, customer service, back-office processes, and IT support. For example, Google helped Warner Bros. re-edit The Wizard of Oz for the Las Vegas Sphere, and Home Depot uses AI to answer HR questions more quickly.

Kurian’s message: cloud infrastructure only becomes truly profitable when companies purchase AI services on top of it. With this, Google Cloud wants to position itself firmly in the next phase of the cloud market.





New AI Tool Predicts Treatments That Reverse Cell Disease



In a move that could reshape drug discovery, researchers at Harvard Medical School have designed an artificial intelligence model capable of identifying treatments that reverse disease states in cells.

Unlike traditional approaches that typically test one protein target or drug at a time in hopes of identifying an effective treatment, the new model, called PDGrapher and available for free, focuses on multiple drivers of disease and identifies the genes most likely to revert diseased cells back to healthy function.

The tool also identifies the best single or combined targets for treatments that correct the disease process. The work, described Sept. 9 in Nature Biomedical Engineering, was supported in part by federal funding.

By zeroing in on the targets most likely to reverse disease, the new approach could speed up drug discovery and design and unlock therapies for conditions that have long eluded traditional methods, the researchers noted.

“Traditional drug discovery resembles tasting hundreds of prepared dishes to find one that happens to taste perfect,” said study senior author Marinka Zitnik, associate professor of biomedical informatics in the Blavatnik Institute at HMS. “PDGrapher works like a master chef who understands what they want the dish to be and exactly how to combine ingredients to achieve the desired flavor.”

The traditional drug-discovery approach — which focuses on activating or inhibiting a single protein — has succeeded with treatments such as kinase inhibitors, drugs that block certain proteins used by cancer cells to grow and divide. However, Zitnik noted, this discovery paradigm can fall short when diseases are fueled by the interplay of multiple signaling pathways and genes. For example, many breakthrough drugs discovered in recent decades — think immune checkpoint inhibitors and CAR T-cell therapies — work by targeting disease processes in cells.

The approach enabled by PDGrapher, Zitnik said, looks at the bigger picture to find compounds that can actually reverse signs of disease in cells, even if scientists don’t yet know exactly which molecules those compounds may be acting on.

How PDGrapher works: Mapping complex linkages and effects

PDGrapher is a type of artificial intelligence tool called a graph neural network. This tool doesn’t just look at individual data points but at the connections that exist between these data points and the effects they have on one another.

In the context of biology and drug discovery, this approach is used to map the relationship between various genes, proteins, and signaling pathways inside cells and predict the best combination of therapies that would correct the underlying dysfunction of a cell to restore healthy cell behavior. Instead of exhaustively testing compounds from large drug databases, the new model focuses on drug combinations that are most likely to reverse disease.

PDGrapher points to parts of the cell that might be driving disease. Next, it simulates what happens if these cellular parts were turned off or dialed down. The AI model then predicts whether a diseased cell would return to a healthy state if certain targets were “hit.”

“Instead of testing every possible recipe, PDGrapher asks: ‘Which mix of ingredients will turn this bland or overly salty dish into a perfectly balanced meal?’” Zitnik said.
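The search described above can be caricatured with a toy model: treat a cell’s expression state as a vector of gene activities, simulate zeroing out candidate targets, and score how close each perturbed state lands to the healthy profile. This sketch, with invented gene names and values, is a deliberately simplified stand-in for the actual graph neural network:

```python
from itertools import combinations

# Toy stand-in for perturbation search: a cell state is a vector of gene
# activities; "hitting" a target zeroes its activity; we score how close
# each perturbed diseased state lands to the healthy profile.
# Gene names and values are invented for illustration.

GENES = ["geneA", "geneB", "geneC", "geneD"]
HEALTHY = {"geneA": 1.0, "geneB": 0.0, "geneC": 0.5, "geneD": 0.0}
DISEASED = {"geneA": 1.0, "geneB": 0.9, "geneC": 0.5, "geneD": 0.8}

def distance_after_knockout(state, targets):
    """Squared distance to the healthy profile after zeroing the targets."""
    perturbed = {g: (0.0 if g in targets else v) for g, v in state.items()}
    return sum((perturbed[g] - HEALTHY[g]) ** 2 for g in GENES)

def best_targets(max_size=2):
    """Search all single and combination knockouts up to max_size genes."""
    candidates = [
        frozenset(c)
        for k in range(1, max_size + 1)
        for c in combinations(GENES, k)
    ]
    return min(candidates, key=lambda t: distance_after_knockout(DISEASED, t))

print(sorted(best_targets()))  # the knockout set closest to healthy
```

The real model avoids this brute-force enumeration, which is exactly why it scales to genome-sized graphs; the toy only conveys the objective of reverting a diseased state rather than scoring one drug at a time.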

Advantages of the new model

The researchers trained the tool on a dataset of diseased cells before and after treatment so that it could figure out which genes to target to shift cells from a diseased state to a healthy one.

Next, they tested it on 19 datasets spanning 11 types of cancer, using both genetic and drug-based experiments, asking the tool to predict various treatment options for cell samples it had not seen before and for cancer types it had not encountered.

The tool accurately predicted drug targets already known to work but that were deliberately excluded during training to ensure the model did not simply recall the right answers, and it identified additional candidates supported by emerging evidence. Among them, the model highlighted KDR (VEGFR2) as a target for non-small cell lung cancer, aligning with clinical evidence, and flagged TOP2A, an enzyme already targeted by approved chemotherapies, as a treatment target in certain tumors, adding to evidence from recent preclinical studies that TOP2A inhibition may curb the spread of metastases in non-small cell lung cancer.

The model showed superior accuracy and efficiency, compared with other similar tools. In previously unseen datasets, it ranked the correct therapeutic targets up to 35 percent higher than other models did and delivered results up to 25 times faster than comparable AI approaches.

What this AI advance spells for the future of medicine

The new approach could streamline the way new drugs are designed, the researchers said, because instead of trying to predict how every possible change would affect a cell and then looking for a useful drug, PDGrapher directly seeks the specific targets that can reverse a disease trait. That makes it faster to test ideas and lets researchers focus on fewer, more promising targets.

This tool could be especially useful for complex diseases fueled by multiple pathways, such as cancer, in which tumors can outsmart drugs that hit just one target. Because PDGrapher identifies multiple targets involved in a disease, it could help circumvent this problem.

Additionally, the researchers said that after careful testing to validate the model, it could one day be used to analyze a patient’s cellular profile and help design individualized treatment combinations.

Finally, because PDGrapher identifies cause-effect biological drivers of disease, it could help researchers understand why certain drug combinations work — offering new biological insights that could propel biomedical discovery even further.

The team is currently using this model to tackle brain diseases such as Parkinson’s and Alzheimer’s, looking at how cells behave in disease and spotting genes that could help restore them to health. The researchers are also collaborating with colleagues at the Center for XDP at Massachusetts General Hospital to identify new drug targets and map which genes or pairs of genes could be affected by treatments for X-linked Dystonia-Parkinsonism, a rare inherited neurodegenerative disorder.

“Our ultimate goal is to create a clear road map of possible ways to reverse disease at the cellular level,” Zitnik said.

Reference: Gonzalez G, Lin X, Herath I, Veselkov K, Bronstein M, Zitnik M. Combinatorial prediction of therapeutic perturbations using causally inspired neural networks. Nat Biomed Eng. 2025:1-18. doi: 10.1038/s41551-025-01481-x

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.





Driving the Way to Safer and Smarter Cars



A new, scalable neural processing technology based on co-designed hardware and software IP for customized, heterogeneous SoCs.

As autonomous vehicles have only begun to appear on limited public roads, it has become clear that achieving widespread adoption will take longer than early predictions suggested. With Level 3 systems in place, the road ahead leads to full autonomy and Level 5 self-driving. However, it’s going to be a long climb. Much of the technology that got the industry to Level 3 will not scale in all the needed dimensions—performance, memory usage, interconnect, chip area, and power consumption.

This paper looks at the challenges waiting down the road, including increasing AI operations while decreasing power consumption in realizable solutions. It introduces a new, scalable neural processing technology based on co-designed hardware and software IP for customized, heterogeneous SoCs that can help solve them.



