
AI Research

Global CPG Companies Join Generative and Agentic AI Rush


Consumer packaged goods companies are accelerating the adoption of artificial intelligence in their operations, marketing and supply chains as they seek new ways to boost growth and efficiency in a mature and competitive industry.

In May, PepsiCo announced a collaboration with Amazon Web Services to enhance its in-house generative AI platform, PepGenX. The partnership gives PepGenX access to various multimodal and agentic AI models on AWS.

“This strategic collaboration will strengthen our mature cloud strategy and unlock new levels of agility, intelligence and scalability across the company,” Athina Kanioura, chief strategy and transformation officer at PepsiCo, said in a statement.

The partnership spans PepsiCo’s lines of business globally. The changes include the following:

  • Moving applications and workloads to the cloud.
  • Giving in-house developers access to different multimodal AI models and agentic AI capabilities to enhance PepGenX, via AWS.
  • Enabling insights into real-time advertising performance, audience segmentation, hyper-personalized content and targeted marketing capabilities across Amazon’s customers.
  • Collaborating to transform digital supply chain capabilities, including predictive maintenance for manufacturing and logistics.

On the heels of this alliance, PepsiCo announced last month that it would deploy Salesforce’s Agentforce AI agents to manage “key functions,” enhance customer support and operational efficiency, and empower the sales team to focus on growth and deeper client engagement.

“Embracing an AI-first world means reimagining an enterprise where humans and intelligent agents don’t just coexist, they collaborate,” Kanioura said in a statement.

Humans and AI agents will be able to work together to respond faster to customer service inquiries, enable more targeted and automated marketing campaigns and promotions, and more.

In April, at Nvidia’s GTC conference, PepsiCo showcased a digital twin of a warehouse that uses AI to simulate and optimize operations. The model incorporates generative AI and computer vision to test scenarios before changes are deployed to physical facilities.

The June PYMNTS Intelligence report “AI at the Crossroads: Agentic Ambitions Meet Operational Realities” found that virtually every large organization is embracing generative AI to enhance productivity, streamline decision making and drive innovation. They are also using generative AI to improve the services and goods they offer to customers.

However, the next iteration — AI agents that autonomously perform tasks — is giving chief operating officers pause, according to the report. More than half of COOs are concerned about the accuracy of AI-generated outputs. Even narrow tasks like coding still require at least some human oversight.

See also: CPG Marketing Embraces New Business Models for Digital Transformation

Unilever, Nestlé and Coca-Cola Jump In

Unilever, the maker of Dove, Knorr, Ben & Jerry’s and more, has several AI initiatives. One of the more recent is the creation of digital twins of its products to add depth to the product images used in its ads.

Using real-time 3D, Nvidia Omniverse and OpenUSD, these 3D replicas add a “level of realism” the company has never achieved before, helping its products stand out in a sea of ads, Unilever said.

Unilever’s creative staff can also start from a single product shot and quickly produce variants with different wording, languages, backgrounds and formats for channels such as TV and digital commerce.

“Our product twins can be deployed everywhere and anywhere, accurately and consistently, so content is generated faster and on brand,” Unilever Chief Growth and Marketing Officer Esi Eggleston Bracey said in a statement. “We call it creativity at the speed of life.”

The use of digital twins not only cuts costs but enables Unilever to bring products to market faster, the company said.

For example, its beauty and wellbeing brands were the first to use digital twins, and the company is now expanding the tech to include TRESemmé, Dove, Vaseline and Clear.

Unilever said it is seeing 55% cost savings and 65% faster turnaround in content creation. The images also elicit higher engagement, holding customers’ attention three times longer than traditional images and doubling click-through rates.

In another use of AI, Unilever gathers insights across its global operations to support forecasting and inform channel strategy.

For example, advanced modeling powered by AI can help sales representatives predict what a retailer is likely to buy. As such, sales teams can now personalize their engagement strategies, customize their loyalty programs and plan more targeted promotions.

Using AI and image processing, photos of in-store displays become a key data source for sales teams. They can get insights into stock levels to better advise retailers on product placement and merchandising.
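As a rough illustration of how a processed shelf photo can become a stock-level signal, consider the sketch below. It is hypothetical: Unilever has not published its pipeline, and the `shelf_occupancy` and `restock_alert` functions, along with the 40% threshold, are invented for illustration. The sketch assumes an upstream detector has already turned the photo into a boolean mask of product pixels.

```python
import numpy as np

def shelf_occupancy(mask: np.ndarray) -> float:
    """Fraction of the shelf area covered by detected product pixels.

    `mask` is a 2D boolean array produced by an upstream image model
    (hypothetical stand-in for the real detection step).
    """
    return float(mask.mean())

def restock_alert(mask: np.ndarray, threshold: float = 0.4) -> bool:
    """Flag a display for restocking when occupancy drops below the threshold."""
    return shelf_occupancy(mask) < threshold

# Toy example: a 4x4 "shelf" where only the top row (4 of 16 cells) holds product.
mask = np.zeros((4, 4), dtype=bool)
mask[0, :] = True
print(shelf_occupancy(mask))  # 0.25
print(restock_alert(mask))    # True
```

In practice the interesting work is in the detection model that produces the mask; the point here is only that, once images become structured data, stock-level advice reduces to simple aggregate checks like this.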

Other CPG firms are following suit. In June, Nestlé also launched digital twins of its products for marketing purposes. These 3D virtual replicas let creative teams revise product packaging, change backgrounds and make other changes to adapt to local markets.

“This means that new creative content can be generated using AI, without having to constantly reshoot from scratch,” according to a company blog post.

As a result, Nestlé can respond more quickly in a fast-moving digital environment where social media ad campaigns often require six or more ad formats to succeed and product packaging changes constantly.

The company worked with Accenture, Nvidia and Microsoft on the initiative.

This month, Nestlé said its R&D team is working with IBM to develop a generative AI tool that can discover new packaging materials, as the company moves away from virgin plastic toward alternatives such as recyclable and paper-based packaging.

Nestlé wants packaging materials that not only protect their contents but are also cost-effective and recyclable.

The Coca-Cola Company is also actively using AI. In May, the company announced a partnership with Adobe to embed AI in design at scale. Project Fizzion, a design intelligence system, learns from designers and encodes their creative intent to automatically apply brand rules across formats, platforms and markets.

This encoded intent, called StyleID, acts as a real-time guide that lets Coca-Cola teams and creative partners generate hundreds of localized versions of an ad campaign for faster execution.

However, Coca-Cola has had an early misstep in using AI. Last year, consumers criticized its AI-generated Christmas promotion video as “soulless” and “devoid of any actual creativity,” according to NBC News.



E-research library with AI tools to assist lawyers | Delhi News


New Delhi: In an attempt to integrate legal work in courts with artificial intelligence, the Bar Council of Delhi (BCD) has opened a one-of-a-kind e-research library at the Rouse Avenue courts. Inaugurated on July 5 by law minister Kapil Mishra, the library offers various software tools to assist lawyers in their legal work. Set up with initial funding of Rs 20 lakh, the library may also be expanded so that it can be accessed from anywhere, BCD functionaries told TOI.

Named after former BCD chairman BS Sherawat, the library offers an integrated system across 15 desktops, including the legal research platform SCC Online, the legal research database Manupatra, an AI platform called Lucio, and several e-books on law.

Advocate Neeraj, president of the Central Delhi Bar Court Association, told TOI: “The vision behind this initiative is to help law practitioners in their research. Lawyers are the officers of the honourable court who assist the judicial officer to reach a verdict in cases. This library will help lawyers in their legal work. Keeping that in mind, considering a request by our association, BCD provided us with funds and resources.”

The library, open from 9:30 am to 5:30 pm, aims to use evolving technology to allow access from anywhere in the country. “We are thinking along those lines too. It will be good if a lawyer needs some research on some law point and can access the AI tools from anywhere; she will be able to upgrade herself immediately to assist the court and present her case more efficiently,” Neeraj added.

Staffed with one technical person and a superintendent, the facility will cost around Rs 1 lakh per month to remain functional. With pendency in Delhi district courts now running over 15.3 lakh cases, AI tools can help law practitioners as well as the courts.

Advocate Vikas Tripathi, vice-president of the Central Delhi Court Bar Association, said: “Imagine AI tools which can give you relevant references, cite related judgments, and even prepare a case if provided with proper inputs. The AI tools have immense potential.”

In July 2024, ‘Adalat AI’ was inaugurated in Delhi’s district courts. This AI-driven speech recognition software assists court stenographers by transcribing witness examinations and orders dictated by judges, streamlining workflow and automating many processes. A judicial officer logs in, presses a few buttons and speaks their observations, which are automatically transcribed, legal language included; the order is prepared automatically.

The then Delhi High Court Chief Justice, now Supreme Court Judge, Manmohan said: “The biggest problem I see judges facing is that there is a large demand for stenographers, but there’s not a large pool available. I think this app will solve that problem to a large extent. It will ensure that a large pool of stenographers will become available for other purposes.” At present, the application is being used in at least eight states, including Kerala, Karnataka, Andhra Pradesh, Delhi, Bihar, Odisha, Haryana and Punjab.



Enterprises will strengthen networks to take on AI, survey finds


According to the survey, respondents reported distributing their AI workloads across the following environments:

  • Private data centers: 29.5%
  • Traditional public cloud: 35.4%
  • GPU-as-a-service specialists: 18.5%
  • Edge compute: 16.6%

“There is little variation from training to inference, but the general pattern is workloads are concentrated a bit in traditional public cloud and then hyperscalers have significant presence in private data centers,” McGillicuddy explained. “There is emerging interest around deploying AI workloads at the corporate edge and edge compute environments as well, which allows them to have workloads residing closer to edge data in the enterprise, which helps them combat latency issues and things like that. The big key takeaway here is that the typical enterprise is going to need to make sure that its data center network is ready to support AI workloads.”
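The latency case for edge placement comes down to propagation delay: a request cannot travel faster than light in fiber, so distance sets a hard floor on round-trip time. A back-of-the-envelope model makes the point (the distances, processing time and fiber speed below are illustrative assumptions, not figures from the survey):

```python
def round_trip_ms(distance_km: float, processing_ms: float) -> float:
    """Minimum round-trip time for a request to a compute site.

    Light in fiber travels roughly 200,000 km/s; the round trip
    doubles the path, and processing time is added on top.
    """
    propagation_ms = (2 * distance_km / 200_000) * 1_000
    return propagation_ms + processing_ms

# Hypothetical comparison: an edge site 50 km away vs. a cloud region
# 2,000 km away, each spending 5 ms on inference.
print(round_trip_ms(50, 5))     # ~5.5 ms
print(round_trip_ms(2_000, 5))  # ~25 ms
```

Under these assumptions the distant region adds roughly 20 ms per round trip before congestion or queuing is even considered, which is the gap edge deployments are meant to close.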

AI networking challenges

The popularity of AI doesn’t remove some of the business and technical concerns that the technology brings to enterprise leaders.

According to the EMA survey, business concerns include security risk (39%), cost/budget (33%), rapid technology evolution (33%), and networking team skills gaps (29%). Respondents also indicated several concerns around both data center networking issues and WAN issues. Concerns related to data center networking included:

  • Integration between AI network and legacy networks: 43%
  • Bandwidth demand: 41%
  • Coordinating traffic flows of synchronized AI workloads: 38%
  • Latency: 36%

WAN issues respondents shared included:

  • Complexity of workload distribution across sites: 42%
  • Latency between workloads and data at WAN edge: 39%
  • Complexity of traffic prioritization: 36%
  • Network congestion: 33%

“It’s really not cheap to make your network AI ready,” McGillicuddy stated. “You might need to invest in a lot of new switches and you might need to upgrade your WAN or switch vendors. You might need to make some changes to your underlay around what kind of connectivity your AI traffic is going over.”

Enterprise leaders intend to invest in infrastructure to support their AI workloads and strategies. According to EMA, planned infrastructure investments include high-speed Ethernet (800 GbE) for 75% of respondents, hyperconverged infrastructure for 56% of those polled, and SmartNICs/DPUs for 45% of surveyed network professionals.



Amazon Web Services builds heat exchanger to cool Nvidia GPUs for AI


The letters AI, which stand for “artificial intelligence,” on display at the Amazon Web Services booth at the Hannover Messe industrial trade fair in Hannover, Germany, on March 31, 2025.

Julian Stratenschulte | Picture Alliance | Getty Images

Amazon said Wednesday that its cloud division has developed hardware to cool down next-generation Nvidia graphics processing units that are used for artificial intelligence workloads.

Nvidia’s GPUs, which have powered the generative AI boom, require massive amounts of energy. That means companies using the processors need additional equipment to cool them down.

Amazon considered erecting data centers that could accommodate widespread liquid cooling to make the most of these power-hungry Nvidia GPUs. But that process would have taken too long, and commercially available equipment wouldn’t have worked, Dave Brown, vice president of compute and machine learning services at Amazon Web Services, said in a video posted to YouTube.

“They would take up too much data center floor space or increase water usage substantially,” Brown said. “And while some of these solutions could work for lower volumes at other providers, there simply wouldn’t be enough liquid-cooling capacity to support our scale.”

Instead, Amazon engineers developed the In-Row Heat Exchanger, or IRHX, which can be plugged into existing and new data centers. More traditional air cooling was sufficient for previous generations of Nvidia chips.

Customers can now access the AWS service as computing instances that go by the name P6e, Brown wrote in a blog post. The new systems accompany Nvidia’s design for dense computing power. Nvidia’s GB200 NVL72 packs a single rack with 72 Nvidia Blackwell GPUs that are wired together to train and run large AI models.

Computing clusters based on Nvidia’s GB200 NVL72 were previously available through Microsoft and CoreWeave; AWS, the world’s largest supplier of cloud infrastructure, now joins them.

Amazon has rolled out its own infrastructure hardware in the past. The company has custom chips for general-purpose computing and for AI, and designed its own storage servers and networking routers. In running homegrown hardware, Amazon depends less on third-party suppliers, which can benefit the company’s bottom line. In the first quarter, AWS delivered the widest operating margin since at least 2014, and the unit is responsible for most of Amazon’s net income.

Microsoft, the second-largest cloud provider, has followed Amazon’s lead and made strides in chip development. In 2023, it designed its own systems, called Sidekicks, to cool the Maia AI chips it developed.
