
Tools & Platforms

AWS is launching an AI agent marketplace next week with Anthropic as a partner



Amazon Web Services (AWS) is launching an AI agent marketplace next week and Anthropic is one of its partners, TechCrunch has exclusively learned.

The AWS agent marketplace launch will take place at the AWS Summit in New York City on July 15, two people familiar with the development told TechCrunch. AWS and Anthropic did not respond to requests for comment.

AI agents are ubiquitous nowadays, and investors across Silicon Valley are bullish on startups building them, even if there is some disagreement on exactly what defines an AI agent. The term is loosely used to describe computer programs that can make decisions and perform tasks independently, such as interacting with software, by using an AI model on the backend.

AI behemoths such as OpenAI and Anthropic are promoting them as the next big thing in tech. However, the distribution of AI agents poses a challenge, as most companies offer them in silos. AWS appears to be taking a step to address this with its new marketplace.

The company’s dedicated agent marketplace will allow startups to directly offer their AI agents to AWS customers. The marketplace will also allow enterprise customers to browse, search for, and install AI agents based on their requirements from a single location, a source said.

That could give Anthropic — and other AWS agent marketplace partners — a considerable boost.

Anthropic, which already has Amazon’s backing and is reportedly in line for another multibillion-dollar investment from the e-commerce company, views AI’s future primarily in terms of agents — at least for the coming years. Anthropic builds AI agents in-house and enables developers to create them using its API.

AWS’ marketplace would help Anthropic reach more customers, including those who may already use AI agents from its rivals, such as OpenAI. Anthropic’s involvement in the marketplace could also attract more developers to use its API to create more agents, and eventually increase its revenues. The company already hit $3 billion in annualized revenue in late May.

Like any other online marketplace, AWS will take a cut of the revenue that startups earn from agent installations. However, that cut is expected to be small relative to the new revenue streams and customers the marketplace could unlock for those startups.

The marketplace model will allow startups to charge customers for agents. The structure is similar to how a marketplace might price SaaS offerings rather than bundling them into broader services, one of the sources said.

Amazon is not the first tech giant to offer a marketplace for agents. In April, Google Cloud introduced an AI Agent Marketplace to help developers and businesses list, buy, and sell AI agents. Microsoft also introduced a similar offering, called Agent Store, within Microsoft 365 Copilot a month later. Similarly, enterprise software providers, including Salesforce and ServiceNow, have their own agent marketplaces.

That said, we have yet to see how successful these marketplaces are for smaller AI startups and enterprises seeking specific AI agents.




AI technology drives sharp rise in synthetic abuse material



New data reveals over 1,200 AI-generated abuse videos have been discovered so far in 2025, a significant rise from just two during the same period last year.

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

No longer crude or glitch-filled, such material now appears so lifelike that, under UK law, it must be treated as if it were authentic recordings.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly: what once involved clumsy, easily detected manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material and share the exact web page where it is located.





AI on the line: How AI is transforming vision inspection technologies



In an era of tightening global regulations and rising consumer expectations, the F&B industry is increasingly turning to advanced vision inspection technologies. From spotting defects to ensuring compliance, these automated inspection tools are reshaping quality control, enhancing efficiency, reducing waste and boosting safety. FoodBev’s Siân Yates explores how cutting-edge technology is transforming the industry, one perfectly inspected product at a time.

In the food and beverage industry, traditional quality inspection methods have always relied on human observation – an inherently inconsistent and flawed process. Automated vision inspection systems offer a transformative alternative. By detecting foreign objects, assessing product uniformity and ensuring that only items meeting strict quality criteria reach consumers, these systems significantly enhance operational efficiency and minimise errors.

“As the food industry moves towards more automation, applications are becoming increasingly complex, largely due to the variability in food products,” said Anthony Romeo, product manager at US-based vision solutions company Oxipital AI. This complexity stems from the need for automated systems to adapt to the wide range of textures, sizes and ingredients in food, making precise automation a key challenge.

Stephan Pottel, director of strategy at Zebra Technologies, highlighted the rising demand for intelligent automation: “There’s a growing need for machine vision and 3D solutions, powered by deep learning, to address more complex food and packaging use cases, along with vision-guided robotics for tasks like inspection, conveyor belt picking and sortation workflows”.

Key features of vision inspection

1. Defect detection

Vision inspection systems excel in identifying defects that may go unnoticed by human inspectors. These systems utilise high-resolution cameras and advanced algorithms to detect foreign objects, surface defects, and inconsistencies in size and shape. For example, in the fruit packing industry, vision systems can identify bruised or rotten fruit, ensuring only high-quality products are packaged and shipped.
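
To make the mechanism concrete, here is a minimal Python sketch (using OpenCV, purely as an illustrative assumption about tooling) of the kind of check such a system might run: it flags a fruit image when too many pixels fall in a dark, bruise-like colour band. Production systems rely on far more sophisticated models and calibrated optics.

    import cv2
    import numpy as np

    def looks_defective(frame_bgr: np.ndarray, max_dark_ratio: float = 0.05) -> bool:
        """Flag a fruit image if dark, bruise-like pixels exceed a set share of the frame."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Treat low-brightness pixels as candidate bruise regions (hypothetical threshold).
        bruise_mask = cv2.inRange(hsv, (0, 0, 0), (180, 255, 60))
        dark_ratio = cv2.countNonZero(bruise_mask) / bruise_mask.size
        return dark_ratio > max_dark_ratio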

2. Label verification

These technologies are increasingly used for label verification, ensuring compliance with regulatory standards. Systems can check for correct placement, legibility and adherence to labelling requirements, such as allergen information and expiration dates. In practice, vision is deployed more often for label verification than for detecting food surface defects, enhancing compliance and reducing the risk of costly recalls.

3. Product uniformity assessment

Maintaining product uniformity is crucial in the food and beverage sector. Vision inspection systems can assess visual aspects such as size, shape and colour. For instance, a snack manufacturer might use vision inspection to ensure that chips are uniformly shaped and coloured, meeting consumer expectations for quality and appearance.

4. Adaptive manufacturing

Advanced vision systems, particularly those incorporating AI and 3D technology, enable adaptive manufacturing processes. These systems can adjust production parameters in real time based on the visual data they collect. For example, in a bakery, vision systems can monitor the size and shape of pastries as they are produced, allowing adjustments to baking times or temperatures to ensure consistent quality.
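
As an illustration of that closed loop, the sketch below (plain Python; the target diameter, gain and function names are hypothetical) shows a proportional correction in which undersized pastries receive a slightly longer bake:

    TARGET_DIAMETER_MM = 80.0      # hypothetical product spec
    GAIN_S_PER_MM = 0.5            # hypothetical controller gain

    def adjust_bake_time(measured_diameter_mm: float, current_bake_s: float) -> float:
        """Proportional correction: the further under target, the longer the next bake."""
        error_mm = TARGET_DIAMETER_MM - measured_diameter_mm
        return current_bake_s + GAIN_S_PER_MM * error_mm

    # Example: a 76 mm pastry against an 80 mm target nudges a 600 s bake to 602 s.
    print(adjust_bake_time(76.0, 600.0))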

Advancements in AI

Recent advancements in AI, automation and 3D technology have greatly enhanced machine vision systems, increasing accuracy and providing realistic visual sensing capabilities. 3D imaging technologies are being used to assess the shape and size of products, ensuring they meet packaging specifications. For instance, in the seafood industry, 3D scanners can evaluate the dimensions of fish fillets, ensuring they are cut to the correct size before packaging. This not only reduces waste but also ensures consistency in product offerings.
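
Once the scanner has produced a measurement, the downstream decision is a simple tolerance test. The sketch below assumes a hypothetical fillet length specification in millimetres:

    from dataclasses import dataclass

    @dataclass
    class FilletSpec:
        min_length_mm: float = 180.0   # hypothetical packaging limits
        max_length_mm: float = 220.0

    def within_spec(measured_length_mm: float, spec: FilletSpec = FilletSpec()) -> bool:
        """Accept a fillet only if the scanned length sits inside the packaging spec."""
        return spec.min_length_mm <= measured_length_mm <= spec.max_length_mm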

What is more, 3D profile sensors improve depth perception and refine quality control, making them indispensable tools in industrial automation. Oxipital AI’s Romeo highlighted the potential of these technologies: “Removing defects before they reach customers is a key first step where vision inspection technology plays a role, but there’s even more data to be leveraged”. By preventing defects from the outset, manufacturers can boost yield and reduce waste.

AI-powered vision inspection systems can also facilitate real-time monitoring of production lines, identifying potential issues before they escalate. This capability allows manufacturers to implement predictive maintenance, reducing downtime and improving overall efficiency.


AI and food safety

Consumer safety remains a top priority in the food and beverage industry. AI plays a crucial role in monitoring and analysing processes in real time, helping manufacturers navigate the complexities of compliance with legal requirements and certification pressures from major retailers.

As Zebra Technologies’ Pottel explained: “AI is ideal for food and beverage products where classification, segmentation, and object and anomaly detection are essential. It is also enhancing asset and inventory visibility, which is crucial for predicting contamination risks and maintaining high safety standards throughout the supply chain.”

“Vision technologies can help check the presentation of food products…offering a quick, repeatable and reliable way to assess the visual aspects of food products like size, shape and colour,” added Neil Gruettner, market manager at Mettler-Toledo Product Inspection.

He continued: “Deployment of this type of AI provides context to support rule-based machine learning and improve human decision-making. It also gives inspection equipment the tools to extract and interpret as much data as possible out of a product, facilitating the evolution and refinement of production processes through the continuous exposure to vast datasets.”

AI-enhanced vision systems also guide robots in handling food products, particularly those that are delicate or irregularly shaped. “AI has proved to be a great method for tackling applications with a high frequency of naturally occurring organic variability, such as food,” Oxipital AI’s Romeo explained, adding that this adaptability ensures gentle and precise handling, which is particularly important when sorting fresh produce or packaging baked goods.

Fortress Technology uses AI to reduce contamination risks and identify defects. The company’s commercial manager, Jodie Curry, told FoodBev: “Streamlining processes reduces the risk of contamination and ensures consistent quality. Implementing automated technology and digital tools helps identify inefficiencies and boosts responsiveness.”


The role of combination inspection systems

The integration of multiple inspection technologies into single systems is another key trend in this space. These combination systems bring together technologies such as X-ray, checkweighing and vision inspection to provide a comprehensive assessment of food products, ensuring higher quality control, better detection of defects and more efficient production lines. This allows for more accurate and reliable monitoring, helping to reduce waste, improve safety standards and enhance overall product quality.

For its part, Fortress offers combination systems that enable comprehensive, multi-layered inspection. The company is already leveraging its proprietary data software package, Contact 4.0, across its metal detection, X-ray and checkweighing technologies. Contact 4.0 allows processors to collect and review data, and to securely monitor and oversee the performance of multiple Fortress metal detectors, checkweighers or combination inspection machines connected to the same network.


Deep learning and quality control

Deep learning is revolutionising visual inspection by enabling machines to learn from data and recognise previously unseen variations of defects. As Zebra Technologies’ Pottel explained: “Deep learning machine vision excels at complex visual inspections, especially where the range of anomalies, defects and spoilage can vary, as is often the case with food.”

This technology is vital for automating inspections and ensuring quality. Deep learning optical character recognition (OCR) also improves packaging inspection by ensuring label quality, regulatory compliance and brand protection. It can verify label presence, confirm allergen accuracy and prevent mislabeling.
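
As a rough sketch of how such a check might look in code, the snippet below uses the open-source pytesseract OCR wrapper (an assumption for illustration; commercial inspection systems ship their own OCR engines) to confirm that required allergen terms appear in the text recognised from a label image:

    from PIL import Image
    import pytesseract

    REQUIRED_ALLERGEN_TERMS = {"milk", "soy", "wheat"}   # hypothetical label spec

    def label_passes(image_path: str) -> bool:
        """Pass the label only if every required allergen term is found by OCR."""
        text = pytesseract.image_to_string(Image.open(image_path)).lower()
        return all(term in text for term in REQUIRED_ALLERGEN_TERMS)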

“The goal is to strengthen quality control by capturing an image and processing it against set quality control parameters,” Mettler-Toledo’s Gruettner pointed out.

Vision systems are increasingly deployed for label verification, ensuring compliance with legislative food labelling requirements. The Mettler-Toledo label inspection portfolio features Smart Camera systems (V11, V13, V15) for basic label inspections, including barcodes, alphanumeric text and label quality. For more advanced applications, the PC-based V31 and V33 systems offer a larger field of view, faster throughput and enhanced inspection capabilities.

Oxipital AI uses 3D product scans and synthetic data generation to eliminate the need for hand-labelling images. “All training is done at Oxipital AI, enabling food and beverage customers to deploy AI without needing a team of experts,” said Romeo. “Our solutions are designed for immediate impact, requiring no coding, DIY or machine-learning expertise to implement and maintain.”


Real-world applications and future prospects

According to Zebra’s Global Manufacturing Vision Study, which surveyed leaders across various manufacturing sectors, including F&B, 66% of respondents plan to implement machine vision within the next five years, while 54% expect AI to drive growth by 2029.

These figures, coupled with the expanding market for vision inspection systems, suggest that the majority of manufacturing leaders are prioritising the integration of these advanced technologies, seeing them as crucial tools for both immediate improvements and long-term growth.

This shift is partly driven by increasingly stringent government regulations, which demand more accurate labelling and packaging. Many companies are already successfully leveraging AI to enhance their operations, particularly in labelling processes.

Despite its clear advantages, the uptake of AI has been slow. The main barrier appears to be cost. While the initial integration can be expensive, AI has demonstrated significant long-term cost savings, making it a worthwhile investment over time.

Zebra’s studies have shown that the pressure to maintain quality while managing fewer resources is intensifying for manufacturers. As a result, cost remains a significant consideration when implementing AI solutions.

Fortress recommends consolidating AI systems into a single interface, which helps reduce costs in the long term. Curry told FoodBev: “The future of our food supply chain depends on advanced inspection systems that enhance food safety, reduce product waste and require minimal factory floor space”.

She continued: “Combination systems offer the benefit of space efficiency, as all sales, services, parts and technical support are handled by one provider. A single interface simplifies training, improves operational safety and drives cost savings through faster installation and reduced training time.”

As AI continues to evolve, its role in vision and inspection is set to expand. Advancements in machine learning, sensor technology and robotics will lead to even more sophisticated and efficient inspection systems, raising quality and safety standards for consumers worldwide.




AI Flow by TeleAI Recognized as a Breakthrough Framework for AI Deployment and Distribution by Omdia



SHANGHAI, July 11, 2025 /PRNewswire/ — AI Flow, the innovative framework developed by TeleAI, the Institute of Artificial Intelligence of China Telecom, has been recognized as playing a key role in the intelligent transformation of telecom infrastructure and services in the latest report by Omdia, a premier technology research and advisory firm. The report highlights AI Flow’s exceptional capabilities in addressing the challenges of edge GenAI implementation, showcasing its device-edge-cloud computing architecture, which optimizes both performance and efficiency, as well as its groundbreaking combination of information and communication technologies.

According to the report, AI Flow facilitates seamless intelligence flow, allowing device-level agents to overcome the limitations of a single device and achieve enhanced functionality. The same communication network can connect advanced LLMs, VLMs, and diffusion models across heterogeneous nodes. By facilitating real-time, synergistic integration and dynamic interaction among these models, the approach achieves emergent intelligence that exceeds the capabilities of any individual model.

Lian Jye Su, Chief Analyst at Omdia, remarked that AI Flow has demonstrated sophisticated approaches to facilitate efficient collaboration across device-edge-cloud tiers and to achieve emergent intelligence through connective and interactive model operations.

The unveiling of AI Flow has also drawn great attention from the AI community on global social media. AI industry observer EyeingAI said on X: “It’s a grounded, realistic take on where AI could be headed.” AI tech influencer Parul Gautam said on X that AI Flow is pushing AI boundaries and is ready to shape the future of intelligent connectivity.

Fulfill the Vision of Ubiquitous Intelligence in Future Communication Networks

AI Flow, developed under the leadership of Professor Xuelong Li, the CTO and Chief Scientist of China Telecom and Director of TeleAI, was introduced to address the significant challenges that hardware resource limitations and communication network constraints pose to the deployment of emerging AI applications, enhancing the scalability, responsiveness, and sustainability of real-world AI systems. It is a multidisciplinary framework designed to enable seamless transmission and emergence of intelligence across hierarchical network architectures by leveraging inter-agent connections and human-agent interactions. At its core, AI Flow emphasizes three key points:

Device-Edge-Cloud Collaboration: AI Flow leverages a unified device-edge-cloud architecture, integrating end devices, edge servers, and cloud clusters, to dynamically optimize scalability and enable low-latency inference of AI models. By developing efficient collaboration paradigms tailored for the hierarchical network architecture, the system minimizes communication bottlenecks and streamlines inference execution.

Familial Models: Familial models refer to a set of multi-scale architectures designed to address diverse tasks and resource constraints within the AI Flow framework. These models facilitate seamless knowledge transfer and collaborative intelligence across the system through their interconnected capabilities. Notably, the familial models are feature-aligned, which allows efficient information sharing without the need for additional middleware. Furthermore, through well-structured collaborative design, deploying familial models over the hierarchical network can achieve enhanced inference efficiency under constrained communication bandwidth and computational resources.

Connectivity- and Interaction-based Intelligence Emergence: AI Flow introduces a paradigm shift to facilitate collaborations among advanced AI models, e.g., LLMs, vision-language models (VLMs), and diffusion models, thereby stimulating emergent intelligence surpassing the capability of any single model. In this framework, the synergistic integration of efficient collaboration and dynamic interaction among models becomes a key boost to the capabilities of AI models.

See AI Flow’s tech articles here:

https://www.arxiv.org/abs/2506.12479

https://ieeexplore.ieee.org/document/10884554 

AI Flow’s First Move: AI-Flow-Ruyi Familial Model

Notably, TeleAI open-sourced the first version of AI Flow’s familial models, AI-Flow-Ruyi-7B-Preview, last week on GitHub.

The model is designed for the next-generation device-edge-cloud model service architecture. Its core innovation lies in shared intermediate features across models of varying scales, enabling the system to generate responses with a subset of parameters, selected according to problem complexity through an early-exit mechanism. Each branch can operate independently while leveraging a shared stem network for reduced computation and seamless switching. Combined with distributed device-edge-cloud deployment, this achieves collaborative inference among large and small models within the family, enhancing the efficiency of distributed model inference.
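
The early-exit idea can be sketched in a few lines of PyTorch. The toy model below shares a stem across branches of increasing depth and stops at the first branch whose confidence clears a threshold; it illustrates the general mechanism only and is not TeleAI’s implementation.

    import torch
    import torch.nn as nn

    class EarlyExitNet(nn.Module):
        """Toy early-exit network: several classifier heads hang off a shared stem."""
        def __init__(self, dim: int = 256, num_classes: int = 10, exits: int = 3):
            super().__init__()
            self.stem = nn.Linear(dim, dim)
            self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(exits))
            self.heads = nn.ModuleList(nn.Linear(dim, num_classes) for _ in range(exits))

        @torch.no_grad()
        def infer(self, x: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
            """Single-sample inference that exits at the first sufficiently confident head."""
            h = torch.relu(self.stem(x))
            for block, head in zip(self.blocks, self.heads):
                h = torch.relu(block(h))
                probs = head(h).softmax(dim=-1)
                if probs.max() >= threshold:   # confident enough: stop here (early exit)
                    return probs
            return probs                       # otherwise fall through to the deepest head

    # Example: easy inputs tend to exit at a shallow head, saving computation.
    net = EarlyExitNet()
    print(net.infer(torch.randn(256)))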

Open-source address:

https://github.com/TeleAI-AI-Flow/AI-Flow-Ruyi 

About TeleAI

TeleAI, the Institute of Artificial Intelligence of China Telecom, is a pioneering team of AI scientists and enthusiasts, working to create breakthrough AI technologies that could build up the next generation of ubiquitous intelligence and improve people’s wellbeing. Under the leadership of Professor Xuelong Li, the CTO and Chief Scientist of China Telecom, TeleAI aims to continuously expand the limits of human cognition and activities, by expediting research on AI governance, AI Flow, Intelligent Optoelectronics (with an emphasis on embodied AI), and AI Agents.

For more information:

https://www.teleai.com.cn/product/AboutTeleAI

Photo – https://mma.prnewswire.com/media/2729356/AI_Flow.jpg


