AI Insights

From Kitchen to Front of House, Restaurants Deploy AI Robots

Restaurants are integrating artificial intelligence (AI)-powered robots throughout their operations, handling tasks from serving food to diners and cooking meals to delivering orders and even mixing cocktails.

Robots are taking more active roles in both customer-facing and back-kitchen tasks, as restaurants face a perfect storm of challenges that include rising labor and food costs, persistent workforce shortages, and growing consumer demand for efficient service.

The smart restaurant robot industry is expected to exceed $10 billion by 2030, driven by deployment across applications such as delivery, order-taking and table service, according to Archive Market Research.

Restaurants are also deploying AI for administrative tasks. According to a June survey for PYMNTS’ SMB Growth Series, 74.4% of restaurants find AI to be “very or extremely effective” in accomplishing business tasks.

The top three reasons cited for using AI were reducing costs, automating tasks, and adopting standards and accreditation, according to the PYMNTS report. However, only about a third of restaurants are using AI at all.

Robotics trends in restaurants include:

1. Robots delivering food to customers.

Uber Eats recently launched autonomous delivery robots developed by Serve Robotics in the Dallas-Fort Worth metro area. It is part of Serve’s plan to deploy 2,000 AI-powered delivery robots in the U.S. this year.

The launch follows Serve’s deployment of delivery bots in Los Angeles, Atlanta and Miami.

Serve said its latest Gen3 robots can carry 13 gallons of cargo, including four 16-inch pizzas, and travel at up to 11 miles per hour. The robots have all-day batteries, can navigate a variety of terrain, and use sensors to achieve Level 4 autonomy, meaning they need no human supervision within designated areas.

Uber Eats also partnered with Avride to launch delivery bots in Jersey City, N.J., the first city on the East Coast with the service. The service is already available in Austin and Dallas.

Avride bots can carry up to 55 pounds and travel at 5 miles per hour on sidewalks, navigating using LiDAR, cameras and ultrasonic sensors. They can operate in various weather conditions, travel up to 12 hours between charges, and secure meals in temperature‑controlled compartments.

2. Robot waiters are serving tables in busy dining rooms.

Robot waiters have moved beyond novelty to practical usage. In several U.S. restaurants, robots equipped with multi‑tray delivery systems, obstacle avoidance and SLAM (Simultaneous Localization and Mapping) navigation are serving diners alongside human wait staff.

In January, South Korean giant LG Electronics acquired a 51% stake in Bear Robotics, a Silicon Valley company that makes AI-driven autonomous service robots. Founded in 2017, Bear has been serving the U.S., South Korean and Japanese markets. The acquisition would enable LG to expand its presence in the commercial robot market.

3. Robots fry, flip and assemble food in the kitchen.

In January, Miso Robotics launched its next-generation “Flippy Fry Station” robot for restaurants. It can cook French fries, onion rings, chicken, tacos and other fried items.

The new Flippy robot is half the size of older models and can move twice as fast, according to the company. It is also more reliable and installs in 75% less time — a few hours — in existing kitchens.

It was designed in collaboration with the White Castle burger chain. Older Flippy models were already installed in White Castle, Jack in the Box, CaliBurger and concession outlets at Dodger Stadium in Los Angeles.

4. Robots serve as baristas and bartenders.

Richtech Robotics’ “Adam,” a barista and bartender robot, served 16,000 drinks in its first four months at Clouffee & Tea in Las Vegas, according to the company, pouring a variety of milk teas, coffees and desserts, including boba tea.

The robot is powered by advanced AI and Nvidia technology. Its vision system monitors how much liquid is poured into each cup and adjusts pour angle and flow rate as necessary.

Adam is also deployed at Walmart, the Golden Corral restaurant chain and Botbar Coffee in Oakland, California, among other partners.

Meanwhile, Makr Shakr’s robotic bartenders — developed in partnership with MIT, Coca‑Cola and Bacardi — operate in cruise ships, airports and hotels worldwide, mixing cocktails in under 60 seconds.

 

Read more: Applebee’s and IHOP to Deploy AI-Powered Tech Support and Personalization

Read more: Chipotle: AI Hiring Platform Cuts Hiring Time by 75%

Read more: How Hardee’s Largest Franchisee Uses AI to Serve Up Efficiency and Profits

Photos, from top: Makr Shakr’s robot bartenders | Credit: Makr Shakr | Credit: Serve Robotics | Credit: Bear Robotics







Artificial Intelligence Coverage Under Cyber Insurance

A small but growing number of cyber insurers are incorporating language into their policies that specifically addresses risks from artificial intelligence (AI). The June 2025 issue of The Betterley Report’s Cyber/Privacy Market Survey identifies at least three insurers that are incorporating specific definitions or terms for AI. This raises an important question for policyholders: Does including specific language for AI in a coverage grant (or exclusion) change the terms of coverage offered?

To be sure, at present few cyber policies expressly address AI. Most insurers appear to be maintaining a “wait and see” approach; they are monitoring the risks posed by AI, but they have not revised their policies. Nevertheless, a few insurers have sought to reassure customers that coverage is available for AI-related events. One insurer has gone so far as to state that its policy “provides affirmative cover for cyber attacks that utilize AI, ensuring that the business is covered for any of the losses associated with such attacks.” To the extent that AI is simply one vector for a data breach or other cyber incident that would otherwise be an insured event, however, it is unclear whether adding AI-specific language expands coverage. On the other side of the coin, some insurers have sought to limit exposure by incorporating exclusions for certain AI events.   

To assess the impact of these changes, it is critical to ask: What does artificial intelligence even mean?

This is a difficult question to answer. The field of AI is vast and constantly evolving. AI can curate social media feeds, recommend shows and products to consumers, generate email auto-responses, and more. Banks use AI to detect fraud. Driving apps use it to predict traffic. Search engines use it to rank and recommend search results. AI pervades daily life and extends far beyond the chatbots and other generative AI tools that have been the focus of recent news and popular attention.

At a more technical level, AI also encompasses numerous nesting and overlapping subfields.  One major subfield, machine learning, encompasses techniques ranging from linear regression to decision trees. It also includes neural networks, which, when layered together, can be used to power the subfield of deep learning. Deep learning, in turn, is used by the subfield of generative AI. And generative AI itself can take different forms, such as large language models, diffusion models, generative adversarial networks, and neural radiance fields.

That may be why most insurers have been reluctant to define artificial intelligence. A policy could name certain concrete examples of AI applications, but it would likely miss many others, and it would risk falling behind as AI is adapted for new uses. The policy could provide a technical definition, but that could be similarly underinclusive and inflexible. Even referring to subsets such as “generative AI” could run into similar issues, given the complex techniques and applications for the technology.

The risk, of course, is that by not clearly defining artificial intelligence, a policy that grants or excludes coverage for AI could have different coverage consequences than either the insurer or insured expected. Policyholders should pay particular attention to provisions purporting to exclude loss or liability from AI risks, and consider what technologies are in use that could offer a basis to deny coverage for the loss. We will watch with interest cyber insurers’ approach to AI — will most continue to omit references to AI, or will more insurers expressly address AI in their policies?


This article was co-authored by Anna Hamel





Code Green or Code Red? The Untold Climate Cost of Artificial Intelligence

As the world races to harness artificial intelligence, few are pausing to ask a critical question: What is AI doing to the planet?

AI is being heralded as a game-changer in the global fight against climate change. AI is already assisting scientists in modeling rising temperatures and extreme weather phenomena, enabling decision-making bodies to predict and prepare for unexpected weather, while allowing energy systems to become smarter and more efficient. According to the World Economic Forum, AI has the potential to contribute up to 5.1 trillion dollars annually to the global economy, under the condition that it is deployed sustainably during the climate transition (WEF, 2025).

Beneath the sleek interfaces and climate dashboards lies a growing environmental cost. The widespread use of generative AI, in particular, is creating a new carbon frontier, one that we’re just beginning to untangle and understand.

Training large-scale AI models is energy-intensive, according to a 2024 MIT report. Training a single GPT-3-sized model can consume about 1,300 megawatt-hours of electricity, enough to power almost 120 U.S. homes for a year. And once deployed, AI systems are not static: they continue to consume energy each time a user interacts with them. For example, generating an AI image may require as much energy as watching a short online video, while a large language model query requires almost 10 times more energy than a typical Google search (MIT, 2024).
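As a rough sanity check on the scale of these figures, the training number and the homes number can be compared directly. The per-home figure below is an assumption (EIA estimates hover around 10 to 11 MWh per U.S. household per year), not a number from the report:

```python
# Back-of-the-envelope check of the MIT-reported training-energy figure.
TRAINING_ENERGY_MWH = 1_300   # GPT-3-scale training run, per the MIT report
HOME_ANNUAL_MWH = 10.7        # assumed average U.S. household consumption

homes = TRAINING_ENERGY_MWH / HOME_ANNUAL_MWH
print(f"One training run ≈ {homes:.0f} homes powered for a year")  # ≈ 121
```

The result lands within rounding distance of the report’s “almost 120 homes,” suggesting the two figures are mutually consistent.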

As AI becomes embedded into everything from online search to logistics and social media, this energy use is multiplying at scale. The International Energy Agency (IEA) warns that data center electricity consumption could double globally by 2026, driven mainly by the rise of AI and cryptocurrency. Recent developments around the Digital Euro only add urgency to the discussion. Without rapid decarbonization of energy grids, this growth could significantly increase global emissions, undermining progress on climate goals (IEA, 2024).

Chart: Sotiris Anastasopoulos, with data from the IEA’s official website.

The climate irony is real: AI is both a solution to and a multiplier of Earth’s climate challenges.

Still, when used responsibly, AI remains a powerful ally. The UNFCCC’s 2023 “AI for Climate Action” roadmap outlines dozens of promising, climate-friendly applications. AI can detect deforestation from satellite imagery, track methane leaks, help decarbonize supply chains, and forecast the spread of wildfires. In agriculture, AI systems can optimize irrigation and fertilizer use, helping reduce emissions and protect soil. In the energy sector, AI enables real-time management of grids, integrating variable sources like solar and wind while improving reliability. But to unlock this potential, the conversation around AI must evolve, from excitement about its capabilities to accountability for its impact.

This starts with transparency. Today, few AI developers publicly report the energy or emissions cost of training and running their models. That needs to change. The IEA calls for AI models to be accompanied by “energy use disclosures” and impact assessments. Governments and regulators should enforce such standards, much as they already do for industrial emissions or vehicle efficiency (UNFCCC, 2023).

Second, green infrastructure must become the default. Data centers must be powered by renewable energy, not fossil fuels. AI models must be optimized for efficiency, not just performance. Instead of racing toward ever-larger models, we should ask what the climate cost of model inflation is, and whether it is worth it (UNFCCC, 2023).

Third, we need to question the uses of AI itself. Not every application is essential. Does society actually benefit from energy-intensive image generation tools for trivial entertainment or advertising? While AI can accelerate climate solutions, it can also accelerate consumption, misinformation, and surveillance. A climate-conscious AI agenda must weigh trade-offs, not just celebrate innovation (UNFCCC, 2023).

Finally, equity matters. As the UNFCCC report emphasizes, the AI infrastructure powering the climate transition is heavily concentrated in the Global North. Meanwhile, the Global South, home to many of the world’s most climate-vulnerable populations, lacks access to these tools, data, and services. An inclusive AI-climate agenda must invest in capacity-building, data access, and technological advancement to ensure no region is left behind (UNFCCC, 2023).

Artificial intelligence is not green or dirty by nature. Like all tools, its impact depends on how and why we use it. We are still early enough in the AI revolution to shape its trajectory, but not for long.

The stakes are planetary. If deployed wisely, AI could help the transition to a net-zero future. If mismanaged, it risks becoming just another accelerant of a warming world.

Technology alone will not solve the climate crisis. But climate solutions without responsible technology are bound to fail.

*Sotiris Anastasopoulos is a student researcher at the Institute of European Integration and Policy of the UoA. He is an active member of YCDF and AEIA and currently serves as a European Climate Pact Ambassador. 

This op-ed is part of To BHMA International Edition’s NextGen Corner, a platform for fresh voices on the defining issues of our time





AI hallucination in Mike Lindell case serves as a stark warning : NPR

MyPillow CEO Mike Lindell arrives at a gathering of supporters of Donald Trump near Trump’s residence in Palm Beach, Fla., on April 4, 2023. On July 7, 2025, Lindell’s lawyers were fined thousands of dollars for submitting a legal filing riddled with AI-generated mistakes.

Octavio Jones/Getty Images



A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell in a Colorado defamation case to pay $3,000 each after they used artificial intelligence to prepare a court filing filled with a host of mistakes and citations of cases that didn’t exist.

Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed the document in February filled with more than two dozen mistakes — including hallucinated cases, meaning fake cases made up by AI tools, Judge Nina Y. Wang of the U.S. District Court in Denver ruled Monday.

“Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it,” Wang wrote in her decision. “Indeed, federal courts rely upon the assistance of attorneys as officers of the court for the efficient and fair administration of justice.”

The use of AI by lawyers in court is not, in itself, illegal. But Wang found the lawyers violated a federal rule that requires lawyers to certify that claims they make in court are “well grounded” in the law. Fake cases, it turns out, don’t meet that bar.

Kachouroff and DeMaster didn’t respond to NPR’s request for comment.

The error-riddled court filing was part of a defamation case involving Lindell, the MyPillow creator, President Trump supporter and conspiracy theorist known for spreading lies about the 2020 election. Last month, Lindell lost this case being argued in front of Wang. He was ordered to pay Eric Coomer, a former employee of Denver-based Dominion Voting Systems, more than $2 million after claiming Coomer and Dominion used election equipment to flip votes to former President Joe Biden.

The financial sanctions, and reputational damage, for the two lawyers are a stark reminder for attorneys who, like many others, are increasingly using artificial intelligence in their work, according to Maura Grossman, a professor at the University of Waterloo’s David R. Cheriton School of Computer Science and an adjunct law professor at York University’s Osgoode Hall Law School.

Grossman said the $3,000 fines “in the scheme of things was reasonably light, given these were not unsophisticated lawyers who just really wouldn’t know better. The kind of errors that were made here … were egregious.”

There have been a host of high-profile cases where the use of generative AI has gone wrong for lawyers and others filing legal cases, Grossman said. It’s become a familiar trend in courtrooms across the country: Lawyers are sanctioned for submitting motions and other court filings filled with case citations that are not real and created by generative AI.

Damien Charlotin tracks court cases from across the world where generative AI produced hallucinated content and where a court or tribunal specifically levied warnings or other punishments. There are 206 cases identified as of Thursday — and that’s only since the spring, he told NPR. There were very few cases before April, he said, but for months since there have been cases “popping up every day.”

Charlotin’s database doesn’t cover every single case where there is a hallucination. But he said, “I suspect there are many, many, many more, but just a lot of courts and parties prefer not to address it because it’s very embarrassing for everyone involved.”

What went wrong in the MyPillow filing

The $3,000 fine for each attorney, Judge Wang wrote in her order this week, is “the least severe sanction adequate to deter and punish defense counsel in this instance.”

The judge wrote that the two attorneys didn’t provide any proper explanation of how these mistakes happened, “most egregiously, citation of cases that do not exist.”

Wang also said Kachouroff and DeMaster were not forthcoming when questioned about whether the motion was generated using artificial intelligence.

Kachouroff, in response, said in court documents that it was DeMaster who “mistakenly filed” a draft version of this filing rather than the right copy that was more carefully edited and didn’t include hallucinated cases.

But Wang wasn’t persuaded that the submission of the filing was an “inadvertent error.” In fact, she called out Kachouroff for not being honest when she questioned him.

“Not until this Court asked Mr. Kachouroff directly whether the Opposition was the product of generative artificial intelligence did Mr. Kachouroff admit that he did, in fact, use generative artificial intelligence,” Wang wrote.

Grossman advised other lawyers who find themselves in the same position as Kachouroff to not attempt to cover it up, and fess up to the judge as soon as possible.

“You are likely to get a harsher penalty if you don’t come clean,” she said.

An illustration picture shows ChatGPT artificial intelligence software, which generates human-like conversation, in February 2023 in Lierde, Belgium. Experts say AI can be incredibly useful for lawyers — they just have to verify their work.

Nicolas Maeterlinck/BELGA MAG/AFP via Getty Images

Trust and verify

Charlotin has found three main issues when lawyers, or others, use AI to file court documents: The first are the fake cases created, or hallucinated, by AI chatbots.

The second is when AI invents a fake quote from a real case.

The third is harder to spot, he said: the citation and case name are correct, but the legal argument attributed to the case is not actually supported by it.

This case involving the MyPillow lawyers is a microcosm of a growing dilemma: how courts and lawyers can strike a balance between embracing a transformative technology and using it responsibly in court. The use of AI is growing faster than authorities can build guardrails around it.

It’s even being used to present evidence in court, Grossman said, and to provide victim impact statements.

Earlier this year, a judge on a New York state appeals court was furious after a plaintiff, representing himself, tried to use a younger, more handsome AI-generated avatar to argue his case for him, CNN reported. That was swiftly shut down.

Despite the cautionary tales that make headlines, both Grossman and Charlotin view AI as an incredibly useful tool for lawyers and one they predict will be used in court more, not less.

Rules over how best to use AI differ from one jurisdiction to the next. Judges have created their own standards, requiring lawyers and those representing themselves in court to submit disclosures when AI has been used. In a few instances, judges in North Carolina, Ohio, Illinois and Montana have established various prohibitions on the use of AI in their courtrooms, according to a database created by the law firm Ropes & Gray.

The American Bar Association, the national representative of the legal profession, issued its first ethical guidance on the use of AI last year. The organization warned that because these tools “are subject to mistakes, lawyers’ uncritical reliance on content created by a [generative artificial intelligence] tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties.”

It continued, “Therefore, a lawyer’s reliance on, or submission of, a GAI tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation …”

The Advisory Committee on Evidence Rules, the group responsible for studying and recommending changes to the national rules of evidence for federal courts, has been slow to act and is still working on amendments for the use of AI for evidence.

In the meantime, Grossman has this suggestion for anyone who uses AI: “Trust nothing, verify everything.”


