Tools & Platforms
Why Coca-Cola CIO Neeraj Tolmare prioritizes big-impact AI pilot projects
Neeraj Tolmare, the chief information officer at Coca-Cola, says the artificial intelligence pilots he invests in are authorized only if the beverage giant can envision revenue generation or big efficiency gains once they are fully in production.
This strategy reflects a mandate from Coke’s CEO James Quincey and the broader leadership team that demands that AI experimentation “has to be tied to a real outcome that moves the needle for us,” says Tolmare. “We are all about scale.”
That scale is massive for Coca-Cola, which is ranked 97th on the Fortune 500. Each AI bet Tolmare makes has to weigh the cost of implementation, as well as how the technology may be used internally across functions ranging from software development to sales. On top of that, he must consider how new advancements like agentic AI will change workflows and how data is shared within Coke and with key external partners, including the 200 bottlers and 950 production facilities that make up the system Coke relies on to serve 2.2 billion beverages each day.
One pilot that Tolmare is particularly excited about is an algorithm Coke developed to help retail outlets better predict demand, using AI to ensure that shelves are always appropriately stocked.
In the past, sales agents would visit stores every few weeks and at times, find themselves facing half-empty coolers. Coke had to rely on historical retail scan data to try to anticipate future demand. But with AI, the company was able to create an algorithm that triangulates historical data with weather patterns, which is an important consideration for beverage purchases, and geolocation data from Google to more precisely predict future sales.
Those AI insights inform the messages Coca-Cola sends via WhatsApp to managers, advising them on when to stock more Sprite or Diet Coke before they run out. The pilot was tested in three countries and saw sales increase by 7% to 8% versus outlets that weren’t using the AI algorithm, according to Tolmare, who intends to broaden the application of this AI tool to more markets globally.
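The mechanics described above can be sketched in a few lines: blend a store's historical sales baseline with weather and geolocation signals, then flag outlets whose shelf stock falls short of predicted demand. This is a minimal illustration only; the function names, weights, and data are hypothetical, and Coca-Cola's actual model is not public.

```python
def forecast_demand(history, temp_c, foot_traffic_index):
    """Predict next-period unit demand for one outlet.

    history            -- list of recent weekly unit sales
    temp_c             -- forecast average temperature (deg C)
    foot_traffic_index -- geolocation-derived traffic score, ~1.0 = normal
    """
    baseline = sum(history) / len(history)  # historical scan-data signal
    # Hotter weather lifts cold-beverage demand; assume ~2% per degree above 20C.
    weather_factor = 1.0 + 0.02 * (temp_c - 20)
    return baseline * weather_factor * foot_traffic_index


def restock_alert(predicted_units, shelf_stock, threshold=0.8):
    """Flag an outlet for a restock message (e.g., via WhatsApp) when
    current shelf stock covers less than `threshold` of predicted demand."""
    return shelf_stock < threshold * predicted_units


# Example: a hot week with above-normal foot traffic at one outlet.
pred = forecast_demand(history=[100, 110, 90], temp_c=30, foot_traffic_index=1.1)
print(restock_alert(pred, shelf_stock=80))  # True: stock covers < 80% of demand
```

The point of the sketch is the triangulation itself: no single signal (history, weather, or location) triggers the alert alone.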
Another investment with big reach is Coke’s use of AI for content creation. The beverage giant sells in more than 180 countries and creates marketing assets in over 130 languages. It’s a process that’s both time-consuming and costly, because Coke has to adjust the materials to reflect not only language but also cultural sensibilities. Coke recently created 20 AI-generated assets, all based on the company’s own proprietary intellectual property, and then produced 10,000 variations to be used across many different languages and geographies.
Tolmare asserts that consumers are 20% more likely to engage with this content than prior iterations that weren’t crafted by AI, and that the AI-produced content was three times faster to generate.
In both cases, he stresses the importance of keeping humans in the loop. “AI has proven that it can unlock value that a human being would not be able to unlock because the computation needed to mine through all this data and bring something meaningful is so complex,” says Tolmare, regarding the algorithm that supports retail sales.
With marketing, AI’s application is trickier, and Coke is still figuring out the right approach. An AI-generated Christmas spot drew some backlash last year, and in April a different campaign mistakenly featured a quote from a book that novelist J.G. Ballard never wrote. Tolmare says Coke has strict guidelines in place to ensure that AI-generated content avoids social biases that would be harmful to consumers, and the company is keenly aware of the hallucinations and deepfakes that can occur when generating creative content with AI tools.
“That’s not to say that we won’t ever fall into it,” admits Tolmare. “But we have a mechanism in place that allows us to react in a responsible way if and when that happens.”
Most of Tolmare’s career has been in the technology industry, including working for gadget manufacturer Palm, which ultimately lost out on an effort to compete with Apple in the smartphone market. He also previously served as a director at networking-equipment company Cisco Systems and as a VP at HP during the years in which it split up the personal-computer and printer businesses from the corporate hardware and services arm.
“I’ve spent more than two decades working for companies that develop and manufacture technology that the rest of the world uses,” says Tolmare. He joined Coca-Cola in 2018 to work at a company that consumes technology, rather than developing it, to grow its business.
Tolmare spearheaded the company’s all-in bet on cloud computing, retiring or selling all of Coke’s physical data centers. Roughly 80% of the company’s footprint is in Microsoft Azure with the rest split between Amazon Web Services and Google Cloud. “We continuously monitor which one is going to give us better efficiency by moving a workload from one cloud to another,” says Tolmare.
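The continuous monitoring Tolmare describes boils down to comparing per-workload cost or efficiency estimates across providers and flagging candidates for migration. The sketch below is purely illustrative: the provider split is from the article, but the pricing figures and the comparison logic are invented assumptions, not Coke's actual tooling.

```python
def cheapest_provider(workload_costs):
    """Given a map of provider -> estimated monthly cost for one workload,
    return the provider with the lowest estimate."""
    return min(workload_costs, key=workload_costs.get)


def migration_candidates(workloads, min_saving=0.10):
    """Flag workloads whose current provider costs at least `min_saving`
    (fractional) more than the cheapest alternative.

    workloads -- map of workload name -> (current_provider, cost_map)
    """
    flagged = {}
    for name, (current, costs) in workloads.items():
        best = cheapest_provider(costs)
        if best != current and costs[best] <= costs[current] * (1 - min_saving):
            flagged[name] = best
    return flagged


# Hypothetical monthly estimates (USD) for two workloads.
workloads = {
    "analytics": ("azure", {"azure": 1200, "aws": 1350, "gcp": 1000}),
    "crm":       ("azure", {"azure": 900,  "aws": 880,  "gcp": 950}),
}
print(migration_candidates(workloads))  # {'analytics': 'gcp'}
```

A savings threshold like `min_saving` matters in practice because migrations carry switching costs; marginal differences (as with "crm" above) are not worth the move.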
His hybrid cloud playbook is also applicable to how Coke thinks about generative AI experimentation. Coke works closely with Microsoft and its strategic partner OpenAI, but also taps other AI hyperscalers including Google, Meta, and Anthropic. “We don’t want to paint ourselves into a corner too soon,” he says. “The market itself is not consolidating.”
What’s next on deck is further exploration of agentic AI, systems designed to perform tasks autonomously or with little human intervention. The company is considering offerings from vendors including Microsoft, SAP, and Adobe, while developing its own AI agents trained on Coke’s data. All of these AI agents remain in the pilot phase, because Tolmare still wants to determine whether these systems can be built cost-effectively while achieving the optimal business outcomes.
“We haven’t launched agentic in production yet, but we are very close,” says Tolmare. “I’m fascinated by what this can do for our business.”
John Kell
Send thoughts or suggestions to CIO Intelligence here.
NEWS PACKETS
Tech giants offer more discounts to the federal government. The Wall Street Journal reports that Oracle is cutting the cost of the company’s database software and cloud-computing service for the federal government. The discount on cloud infrastructure is the first of its kind for the entire government and could be replicated with other cloud providers in negotiations that are currently underway. Other recent price discounts that the General Services Administration was able to extract from vendors include a deal cut with Salesforce to trim the price of messaging app Slack by 90% through the end of November and lower costs from Google and Adobe. The GSA has opted to negotiate these savings through direct engagement with the tech giants, rather than discussing pricing with third parties. “Through procurement consolidation, we’re aiming to bring the leverage of the whole, commanding purchasing power of the federal wallet to these [technology providers] to get the best discounts for the taxpayers,” Federal Acquisition Service Commissioner Josh Gruenbaum told WSJ.
Ford CEO predicts AI will replace half of U.S. white-collar workers. CEOs have increasingly shared their dour prognostications of AI’s impact on the future of work, and the latest came from Jim Farley, the CEO of automaker Ford. Speaking at the Aspen Ideas Festival, Farley made the case that the education system is too focused on pushing students toward a four-year college degree, a path he considers risky given his view that “AI is gonna replace literally half of all white-collar workers in the U.S.” Farley pitched more investment in trade workers, those who may take a job on an auto factory floor or in construction, where there is a shortage of workers. He may have a point, as tech giants continue to slice jobs, with Microsoft earlier this month cutting another 9,000 positions, bringing the total eliminated by the tech giant to 15,000 this year.
But, IT unemployment dropped to its lowest level this year in June. Much of the consternation about AI’s impact on work has been focused on tech jobs, especially as big employers like Microsoft and TikTok continue to generate national headlines when announcing layoffs. That said, across all sectors, businesses are on the hunt for talent and have been hiring software developers and engineers, systems engineers and architects, and cybersecurity professionals, according to CIO Dive, which reported on insights from CompTIA. The IT trade organization reported that IT unemployment fell to 2.8% in June, as companies across all sectors added 90,000 net new tech jobs. Active employer job listings for tech positions reached 455,341 in June, with 47% of that total newly added that month. But the tech industry itself continues to be a weak point, as that sector reduced staffing by a net 7,256 positions in June.
Google parent company’s AI-powered drug firm gearing up for clinical trials. Alphabet’s drug discovery arm, Isomorphic Labs, is getting close to testing AI-designed drugs in humans, Fortune reports, as company president Colin Murdoch says “the next big milestone is actually going to clinical trials, starting to put these things into human beings. We’re staffing up now. We’re getting very close.” Isomorphic Labs, which was spun out of Google’s AI research lab DeepMind in 2021, raised $600 million in April to help propel its goal of combining the expertise of machine learning researchers and pharma veterans to design new treatments at a faster pace, more cheaply, and with a higher rate of success. Isomorphic supports existing pharma drug discovery programs, having already inked research collaborations with Novartis and Eli Lilly, and also develops its own internal drug candidates in areas such as oncology and immunology.
ADOPTION CURVE
Why CIOs need to focus on both data quality and permission. A new survey found that while 52% of companies are “highly reliant” on consented data to craft personalized marketing materials and guide product development, roadblocks remain in how enterprises use this data. Getting that right matters if enterprises want to retain the trust of consumers who authorize the use of their data, while also adhering to state laws in places like California, which has given individuals greater control over their personal information.
Some big issues that CIOs face related to consent requirements include a lack of visibility into data flows (61%), followed closely by difficulty tracking user preferences across channels (59%). They’ll need to solve these problems quickly as 100% of the 265 survey respondents say that consented consumer data is foundational to their ability to succeed when it comes to pursuing AI initiatives.
Kate Parker, CEO of Transcend, the data privacy management software provider that funded the study, tells Fortune that the findings highlight a shift in thinking that’s less about data quality and more about permission to use the consumer data in the first place.
“If CIOs and digital leaders are in agreement that the consent of the data is the fundamental component of AI and personalization, but they also admit very openly that they do not have the infrastructure to solve that in scale, that’s going to be one of the biggest growth blockers,” says Parker.
JOBS RADAR
Hiring:
– H&H is seeking a CIO, based in New York City. Posted salary range: $200K-$250K/year.
– Lease & LaBau is seeking a CIO, based in New York City. Posted salary range: $450K-$500K/year.
– Vibrant Emotional Health is seeking a CTO in a remote-based role. Posted salary range: $261K-$350K/year.
– Major League Soccer is seeking a chief information security officer, based in New York City. Posted salary range: $200K-$275K/year.
Hired:
– Old National Bancorp named Matt Keen as CIO, taking on the C-suite leadership role after the Minneapolis-based bank acquired Bremer Bank, where Keen had served in the same role. Previously, Keen served as a consultant for American Express during his tenure at PricewaterhouseCoopers. He also previously served as CTO at real estate investment trust Two Harbors Investment Corp.
– EProductivity Software Packaging announced the appointment of Scott Brown as CIO, joining the software provider for the packaging and print industries to oversee an expansion of the company’s cloud portfolio, as well as enhance security and compliance across all platforms. Brown joins from software provider Sciforma, where he served as CIO.
– Rancher Government Solutions promoted Adam Toy to the role of CTO, where he will steer the strategic direction of the company’s tech portfolio, overseeing platform engineering, architecture, and product development. Toy had served in that role on an interim basis since March and initially joined Rancher, which helps government agencies modernize their IT infrastructure, in 2020 as a senior solutions architect.
Tools & Platforms
Polimorphic Raises $18.6M as It Beefs Up Public-Sector AI
The latest bet on public-sector AI involves Polimorphic, which has raised $18.6 million in a Series A funding round led by General Catalyst.
The round also included M13 and Shine.
The company raised $5.6 million in a seed round in late 2023.
New York-based Polimorphic sells such products as artificial intelligence-backed chatbots and search tools, voice AI for calls, constituent relationship management (CRM) and workflow software, and permitting and licensing tech.
The new capital will go toward tripling the company’s sales and engineering staff and building more AI product features.
For instance, that includes continued development of the voice AI offering, which can now work with live data (a bonus when it comes to utility billing) and can even tell callers to animal services which pets might be up for adoption, CEO and co-founder Parth Shah told Government Technology in describing his vision for such tech.
The company also wants to bring more AI to CRM and workflow software to help catch errors on applications and other paperwork earlier than before, Shah said.
“We are more than just a chatbot,” he said.
Challenges of public-sector AI include making sure that public agencies truly understand the technology and are “not just slapping AI on what you already do,” Shah said.
As he sees it, working with governments in this way has helped Polimorphic nearly double its customer count every six months. More than 200 public-sector departments at the city, county and state levels use the company’s products, he said, and such growth is among the reasons the company attracted this new round of investment.
The company’s general sales pitch is increasingly familiar to public-sector tech buyers: Software and AI can help agencies deal with “repetitive, manual tasks, including answering the same questions by phone and email,” according to a statement, and help people find civic and bureaucratic information more quickly.
For instance, the company says it has helped customers reduce voicemails by up to 90 percent, with walk-in requests cut by 75 percent. Polimorphic clients include the city of Pacifica, Calif.; Tooele County, Utah; Polk County, N.C.; and the town of Palm Beach, Fla.
The fresh funding will also help the company expand in its top markets, which include Wisconsin, New Jersey, North Carolina, Texas, Florida and California.
The company’s investors are familiar to the gov tech industry. Earlier this year, for example, General Catalyst led an $80 million Series C funding round for Prepared, a public safety tech supplier focused on bringing more assistive AI capabilities to emergency dispatch.
“Polimorphic has the potential to become the next modern system of record for local and state government. Historically, it’s been difficult to drive adoption of these foundational platforms beyond traditional ERP and accounting in the public sector,” said Sreyas Misra, partner at General Catalyst, in the statement. “AI is the jet fuel that accelerates this adoption.”
Tools & Platforms
AI enters the classroom as law schools prep students for a tech-driven practice
When it comes to using artificial intelligence in legal education and beyond, the key is thoughtful integration.
“Think of it like a sandwich,” said Dyane O’Leary, professor at Suffolk University Law School. “The student must be the bread on both sides. What the student puts in, and how the output is assessed, matters more than the tool in the middle.”
Suffolk Law is taking a forward-thinking approach to integrating generative AI into legal education, starting with a required AI course for all first-year students that equips them to use, understand and critique AI as future lawyers.
O’Leary, a long-time advocate for legal technology, said there is a need to balance foundational skills with exposure to cutting-edge tools.
“Some schools are ignoring both ends of the AI sandwich,” she said. “Others don’t have the resources to do much at the upper level.”
One major initiative at Suffolk Law is the partnership with Hotshot, a video-based learning platform used by top law firms, corporate lawyers and litigators.
“The Hotshot content is a series of asynchronous modules tailored for 1Ls,” O’Leary said. “The goal is not for our students to become tech experts but to understand the usage and implications of AI in the legal profession.”
The Hotshot material provides a practical introduction to large language models, explains why generative AI differs from tools students are used to, and uses real-world examples from industry professionals to build credibility and interest.
This structured introduction lays the groundwork for more interactive classroom work when students begin editing and analyzing AI-generated legal content. Students will explore where the tool succeeded, where it failed and why.
“We teach students to think critically,” O’Leary said. “There needs to be an understanding of why AI missed a counterargument or produced a junk rule paragraph.”
These exercises help students learn that AI can support brainstorming and outlining but isn’t yet reliable for final drafting or legal analysis.
Suffolk Law is one of several law schools finding creative ways to bring AI into the classroom — without losing sight of the basics. Whether it’s through required 1L courses, hands-on tools or new certificate programs, the goal is to help students think critically and stay ready for what’s next.
Proactive online learning
Case Western Reserve University School of Law has also taken a proactive step to ensure that all its students are equipped to meet the challenge. In partnership with Wickard.ai, the school recently launched a comprehensive AI training program, making it a mandatory component for the entire first-year class.
“We knew AI was going to change things in legal education and in lawyering,” said Jennifer Cupar, professor of lawyering skills and director of the school’s Legal Writing, Leadership, Experiential Learning, Advocacy, and Professionalism program. “By working with Wickard.ai, we were able to offer training to the entire 1L class and extend the opportunity to the rest of the law school community.”
The program included pre-class assignments, live instruction, guest speakers and hands-on exercises. Students practiced crafting prompts and experimenting with various AI platforms. The goal was to familiarize students with tools such as ChatGPT and encourage a thoughtful, critical approach to their use in legal settings.
Oliver Roberts, CEO and co-founder of Wickard.ai, led the sessions and emphasized the importance of responsible use.
While CWRU Law, like many law schools, has general prohibitions against AI use in drafting assignments, faculty are encouraged to allow exceptions and to guide students in exploring AI’s capabilities responsibly.
“This is a practice-readiness issue,” Cupar said. “Just like Westlaw and Lexis changed legal research, AI is going to be part of legal work going forward. Our students need to understand it now.”
Balanced approach
Starting with the Class of 2025, Washington University School of Law is embedding generative AI instruction into its first-year Legal Research curriculum. The goal is to ensure that every 1L student gains fluency in both traditional legal research methods and emerging AI tools.
Delivered as a yearlong, one-credit course, the revamped curriculum maintains a strong emphasis on core legal research fundamentals, including court hierarchy, the distinction between binding and persuasive authority, primary and secondary sources and effective strategies for researching legislative and regulatory history.
WashU Law is integrating AI as a tool to be used critically and effectively, not as a replacement for human legal reasoning.
Students receive hands-on training in legal-specific generative AI platforms and develop the skills needed to evaluate AI-generated results, detect hallucinated or inaccurate content, and compare outcomes with traditional research methods.
“WashU Law incorporates AI while maintaining the basics of legal research,” said Peter Hook, associate dean. “By teaching the basics, we teach the skills necessary to evaluate whether AI-produced legal research results are any good.”
Stefanie Lindquist, dean of WashU Law, said this balanced approach preserves the rigor and depth that legal employers value.
“The addition of AI instruction further sharpens that edge by equipping students with the ability to responsibly and strategically apply new technologies in a professional context,” Lindquist said.
Forward-thinking vision
Drake University Law School has launched a new AI Law Certificate Program for J.D. students.
The program is a response to the growing need for legal professionals who understand both the promise and complexity of AI.
Designed for completion during a student’s second and third years, the certificate program emphasizes interdisciplinary collaboration, drawing on expertise from across Drake Law School’s campus, including computer science, art and the Institute for Justice Reform & Innovation.
Students will engage with advanced topics such as machine vision and trademark law, quantum computing and cybersecurity, and the broader ethical and regulatory challenges posed by AI.
Roscoe Jones, Jr., dean of Drake Law School, said the AI Law Certificate empowers students to lead at the intersection of law and technology, whether in private practice, government, nonprofit, policymaking or academia.
“Artificial Intelligence is not just changing industries; it’s reshaping governance, ethics and the very framework of legal systems,” he said.
Simulated, but realistic
Suffolk Law has also launched an online platform that allows students to practice negotiation skills with AI bots programmed to simulate the behavior of seasoned attorneys.
“They’re not scripted. They’re human-like,” O’Leary said. “Sometimes polite, sometimes bananas. It mimics real negotiation.”
These interactive experiences in either text or voice mode allow students to practice handling the messiness of legal dialogue, which is an experience hard to replicate with static casebooks or classroom hypotheticals.
Unlike overly accommodating AI assistants, these bots shift tactics and strategies, mirroring the adaptive nature of real-world legal negotiators.
Another tool on the platform supports oral argument prep. Created by Suffolk Law’s legal writing team in partnership with the school’s litigation lab, the AI mock judge engages students in real-time argument rehearsals, asking follow-up questions and testing their case theories.
“It’s especially helpful for students who don’t get much out of reading their outline alone,” O’Leary said. “It makes the lights go on.”
O’Leary also emphasizes the importance of academic integrity. Suffolk Law has a default policy that prohibits use of generative AI on assignments unless a professor explicitly allows it. Still, she said the policy is evolving.
“You can’t ignore the equity issues,” she said, pointing to how students often get help from lawyers in the family or paid tutors. “To prohibit [AI] entirely is starting to feel unrealistic.”
Tools & Platforms
Microsoft pushes billions at AI education for the masses • The Register
After committing more than $13 billion in strategic investments to OpenAI, Microsoft is splashing out billions more to get people using the technology.
On Wednesday, Redmond announced a $4 billion donation of cash and technology to schools and non-profits over the next five years. It’s branding this philanthropic mission as Microsoft Elevate, which is billed as “providing people and organizations with AI skills and tools to thrive in an AI-powered economy.” It will also start the AI Economy Institute (AIEI), a so-called corporate think tank stocked with academics that will be publishing research on how the workforce needs to adapt to AI tech.
The bulk of the money will go toward AI and cloud credits for K-12 schools and community colleges, and Redmond claims 20 million people will “earn an in-demand AI skilling credential” under the scheme, although Microsoft’s record on such vendor-backed certifications is hardly spotless.
“Working in close coordination with other groups across Microsoft, including LinkedIn and GitHub, Microsoft Elevate will deliver AI education and skilling at scale,” said Brad Smith, president and vice chair of Microsoft Corporation, in a blog post. “And it will work as an advocate for public policies around the world to advance AI education and training for others.”
It’s not an entirely new scheme – Redmond already had its Microsoft Philanthropies and Tech for Social Impact charitable organizations, but they are now merging into Elevate. Smith noted Microsoft has already teamed up with North Rhine-Westphalia in Germany to train students on AI, and said similar partnerships across the US education system will follow.
Microsoft is also looking to recruit teachers to the cause.
On Tuesday, Microsoft, along with Anthropic and OpenAI, said it was starting the National Academy for AI Instruction with the American Federation of Teachers to train teachers in AI skills and to pass them on to the next generation. The scheme has received $23 million in funding from the tech giants spread over five years, and aims to train 400,000 teachers at training centers across the US and online.
“AI holds tremendous promise but huge challenges—and it’s our job as educators to make sure AI serves our students and society, not the other way around,” said AFT President Randi Weingarten in a canned statement.
“The direct connection between a teacher and their kids can never be replaced by new technologies, but if we learn how to harness it, set commonsense guardrails and put teachers in the driver’s seat, teaching and learning can be enhanced.”
Meanwhile, the AIEI will sponsor and convene researchers to produce publications, including policy briefs and research reports, on applying AI skills in the workforce, leveraging a global network of academic partners.
Hopefully they can do a better job of it than Redmond’s own staff. After 9,000 layoffs from Microsoft earlier this month, largely in the Xbox division, Matt Turnbull, an executive producer at Xbox Game Studios Publishing, went viral with a spectacularly tone-deaf LinkedIn post (now removed) to former staff members offering AI prompts “to help reduce the emotional and cognitive load that comes with job loss.” ®