A robot shows that machines may one day replace human surgeons

Nearly four decades ago, the Defense Advanced Research Projects Agency (DARPA) and NASA began promoting projects that would make it possible to perform surgeries remotely — whether on the battlefield or in space. Out of those initial efforts emerged robotic surgical systems like Da Vinci, which function as an extension of the surgeon, allowing them to carry out minimally invasive procedures using remote controls and 3D vision. But this still involves a human using a sophisticated tool. Now, the integration of generative artificial intelligence and machine learning into the control of systems like Da Vinci is bringing the possibility of autonomous surgical robots closer to reality.

This Wednesday, the journal Science Robotics published the results of a study conducted by researchers at Johns Hopkins and Stanford universities, in which they present a system capable of autonomously performing several steps of a surgical procedure, learning from videos of humans operating and receiving commands in natural language — just like a medical resident would.

As with a human trainee, the team of scientists has been teaching the robot the steps needed to complete a surgery gradually. Last year, the Johns Hopkins team, led by Axel Krieger, trained the robot to perform three basic surgical tasks: handling a needle, lifting tissue, and suturing. The training was done through imitation and a machine learning system similar to the one underlying ChatGPT, but instead of words and text it uses a robotic language that translates the machine's movement angles into mathematical data.
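To make that idea concrete, here is a minimal sketch of how continuous robot motions can be discretized into tokens that a language-model-style network can learn from demonstrations. This illustrates the general technique only, not the researchers' actual system; the bin count, angle range, and four-joint layout are invented for the example.

```python
import numpy as np

# Hypothetical illustration: discretize continuous joint angles into
# integer "tokens" so a language-model-style network can be trained on
# demonstration trajectories. Bin count and angle range are invented.
N_BINS = 256                      # token vocabulary size per joint
ANGLE_MIN, ANGLE_MAX = -np.pi, np.pi

def angles_to_tokens(joint_angles: np.ndarray) -> np.ndarray:
    """Map each joint angle in [ANGLE_MIN, ANGLE_MAX] to a token id."""
    normalized = (joint_angles - ANGLE_MIN) / (ANGLE_MAX - ANGLE_MIN)
    return np.clip((normalized * N_BINS).astype(int), 0, N_BINS - 1)

def tokens_to_angles(tokens: np.ndarray) -> np.ndarray:
    """Invert the mapping, returning the center angle of each bin."""
    return ANGLE_MIN + (tokens + 0.5) / N_BINS * (ANGLE_MAX - ANGLE_MIN)

# One timestep of a demonstration (four joints) becomes a short token
# sequence -- analogous to a sentence a transformer learns to continue.
step = np.array([0.10, -0.42, 1.57, 0.88])
tokens = angles_to_tokens(step)
print(tokens)                     # [132 110 191 163]
print(tokens_to_angles(tokens))   # approximate reconstruction of the angles
```

Trained on enough such token sequences, a transformer can predict the next motion given what it has seen so far, just as a chatbot predicts the next word.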

In the new experiment, two experienced human surgeons performed demonstration gallbladder-removal surgeries on pig tissue outside the animal. They used 34 gallbladders to collect 17 hours of data and 16,000 trajectories, from which the machine learned. Afterward, working without human intervention on eight gallbladders it had never seen before, the robot was able to perform the 17 tasks required to remove the organ with 100% accuracy — identifying specific ducts and arteries, grasping them precisely, placing clips strategically, and cutting with scissors. During the experiments, the model was able to correct its own mistakes and adapt to unforeseen situations.

In 2022, this same team performed the first autonomous robotic surgery on a live animal: a laparoscopy on a pig. But that robot needed the tissue to have special markers, was in a controlled environment, and followed a pre-established surgical plan. In a statement from his institution, Krieger said it was like teaching a robot to drive a carefully mapped-out route. The new experiment just presented would be — for the robot — like driving on a road it had never seen before, relying only on a general understanding of how to drive a car.

José Granell, head of the Department of Otolaryngology and Head and Neck Surgery at HLA Moncloa University Hospital and professor at the European University of Madrid, believes that the Johns Hopkins team’s work “is beginning to approach something that resembles real surgery.”

“The problem with robotic surgery on soft tissue is that biology has a lot of intrinsic variability, and even if you know the technique, in the real world there are many possible scenarios,” explains Granell. “Asking a robot to carve a bone is easy, but with soft tissue, everything is more difficult because it moves. You can’t predict how it will react when you push, how much it will move, whether an artery will tear if you pull too hard,” continues the surgeon, adding: “This technology changes the way we train the sequence of gestures that constitute surgery.”

For Krieger, this advancement takes us “from robots that can perform specific surgical tasks to robots that truly understand surgical procedures.” The team leader behind this innovation, made possible by generative AI, argues: “It’s a crucial distinction that brings us significantly closer to clinically viable autonomous surgical systems, capable of navigating the messy and unpredictable reality of real-life patient care.”

Francisco Clascá, professor of Human Anatomy and Embryology at the Autonomous University of Madrid, welcomes the study, but points out that “it’s a very simple surgery” and is performed on organs from “very young animals, which don’t have the level of deterioration and complications of a 60- or 70-year-old person, which is when this type of surgery is typically needed.” Furthermore, the robot is still much slower than a human performing the same tasks.

Mario Fernández, head of the Head and Neck Surgery department at the Gregorio Marañón General University Hospital in Madrid, finds the advance interesting, but believes that replacing human surgeons with machines “is a long way off.” He also cautions against being dazzled by technology without fully understanding its real benefits, and points out that its high cost means it won’t be accessible to everyone.

“I know a hospital in India, for example, where they have a robot and can perform two surgical sessions per month, operating on two patients. A total of 48 per year. For them, robotic surgery may be a way to practice and learn, but it’s not a reality for the patients there,” says Fernández. He believes we should appreciate technological progress, but surgery must be judged by what it actually delivers to patients. As a contrasting example, he points out that “a technique called transoral ultrasonic surgery, which was developed in Madrid and is available worldwide, is performed on six patients every day.”

Krieger believes that their proof of concept shows it’s possible to perform complex surgical procedures autonomously, and that their imitation learning system can be applied to more types of surgeries — something they will continue to test with other interventions.

Looking ahead, Granell points out that, beyond overcoming technical challenges, the adoption of surgical robots will be slow because in surgery, “we are very conservative about patient safety.”

He also raises philosophical questions, such as overcoming Isaac Asimov’s First and Second Laws of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm,” and “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” This specialist highlights the apparent contradiction in the fact that human surgeons “do cause harm, but in pursuit of the patient’s benefit; and this is a dilemma that [for a robot] will have to be resolved.”


Teachers Training on AI

MOBILE, Ala. (WALA) – Some leading tech companies are investing millions to train teachers on how to use artificial intelligence. The $23 million initiative is backed by Microsoft, OpenAI, Anthropic, and two teachers’ unions. The goal is to train 400,000 kindergarten through 12th-grade teachers in artificial intelligence over the next five years. The National Academy of AI Instruction announced the effort. The group states that it will develop an AI training curriculum for teachers that can be distributed online and at an in-person campus in New York City.

The announcement comes as schools, teachers, and parents grapple with whether—and how—AI should be used in the classroom. Educators want to ensure students know how to use a technology that’s already transforming workplaces, while teachers themselves can use AI to automate some tasks and free up more time to engage with students.

Samsung unveils its new line of foldable devices at Unpacked

The future is here: Samsung is showcasing its future-ready smartphones, with the new Galaxy Z Fold 7 and Z Flip 7 taking center stage at the company’s latest Unpacked event. The Korean electronics company unveiled the upgrades, including new versions of its smartwatch, and also announced an expanded partnership with Google to inject more artificial intelligence into its foldable lineup. For example, users can access AI by speaking to their watch (which, yes, also tells the time).

The Fold 7 will retail starting at $1,999. Pre-orders start today, and the device will hit shelves on July 25.

The Galaxy Z Flip 7 will retail for $1,099.99, and the Flip 7 FE starts at $899.99. Pre-orders for both devices began Wednesday, and both will be generally available on July 25.



AI vs Supercomputers round 1: galaxy simulation goes to AI

Jul. 10, 2025
Press Release


In the first study of its kind, researchers led by Keiya Hirashima at the RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS) in Japan, along with colleagues from the Max Planck Institute for Astrophysics (MPA) and the Flatiron Institute, have used machine learning, a type of artificial intelligence, to dramatically speed up processing time when simulating galaxy evolution coupled with supernova explosions. This approach could help us understand the origins of our own galaxy, particularly the elements essential for life in the Milky Way.

Understanding how galaxies form is a central problem for astrophysicists. Although we know that powerful events like supernovae can drive galaxy evolution, we cannot simply look to the night sky and see it happen. Scientists rely on numerical simulations that are based on large amounts of data collected from telescopes and other devices that measure aspects of interstellar space. Simulations must account for gravity and hydrodynamics, as well as other complex aspects of astrophysical thermo-chemistry.

On top of this, they must have high temporal resolution, meaning that the time between each 3D snapshot of the evolving galaxy must be small enough that critical events are not missed. For example, capturing the initial phase of supernova shell expansion requires a timescale of just a few hundred years, 1,000 times finer than typical simulations of interstellar space can achieve. In fact, a typical supercomputer takes one to two years to carry out a simulation of a relatively small galaxy at the proper temporal resolution.

Getting over this timestep bottleneck was the main goal of the new study. By incorporating AI into their data-driven model, the research group was able to match the output of a previously modeled dwarf galaxy but got the result much more quickly. “When we use our AI model, the simulation is about four times faster than a standard numerical simulation,” says Hirashima. “This corresponds to a reduction of several months to half a year’s worth of computation time. Critically, our AI-assisted simulation was able to reproduce the dynamics important for capturing galaxy evolution and matter cycles, including star formation and galaxy outflows.”

Like most machine learning models, the researchers’ new model is trained on one set of data and then becomes able to predict outcomes for new data. In this case, the model incorporated a deep neural network and was trained on 300 simulations of an isolated supernova in a molecular cloud with a mass one million times that of our Sun. After training, the model could predict the density, temperature, and 3D velocity of the gas 100,000 years after a supernova explosion. Compared with direct numerical simulations, such as those performed by supercomputers, the new model yielded similar structures and star formation histories but was about four times faster to compute.
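As a rough illustration of what such a surrogate looks like in code, the sketch below maps a handful of pre-explosion gas properties to a predicted post-explosion state. It is a toy stand-in, not the authors' ASURA-FDPS-ML implementation; the input features, layer sizes, and output choices are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Toy surrogate for supernova feedback -- an illustration of the idea,
# NOT the authors' ASURA-FDPS-ML code. Input features, layer sizes and
# outputs are assumptions made for this sketch.
class SupernovaSurrogate(nn.Module):
    """Predicts the local gas state 100,000 years after an explosion."""

    def __init__(self, n_in: int = 8, n_out: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_out),  # density, temperature, vx, vy, vz
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = SupernovaSurrogate()
pre_state = torch.randn(4, 8)   # stand-in for pre-explosion gas features
post_state = model(pre_state)   # predicted state 100,000 years later
print(post_state.shape)         # torch.Size([4, 5])
```

In practice, a network like this would be trained on the 300 high-resolution supernova simulations described above and then called inside the galaxy simulation whenever a star explodes, skipping the many tiny hydrodynamic timesteps a direct calculation would require.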

According to Hirashima, “our AI-assisted framework will allow high-resolution star-by-star simulations of massive galaxies, such as the Milky Way, with the goal of predicting the origin of the solar system and the elements essential for the birth of life.”

Currently, the lab is using the new framework to run a Milky Way-sized galaxy simulation.


Reference

Hirashima, K. et al. (2025). ASURA-FDPS-ML: Star-by-star Galaxy Simulations Accelerated by Surrogate Modeling for Supernova Feedback. The Astrophysical Journal. doi:10.3847/1538-4357/add689

Contact

Keiya Hirashima, Special Postdoctoral Researcher

Division of Fundamental Mathematical Science, RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS)

Adam Phillips
RIKEN Communications Division
Email: adam.phillips [at] riken.jp

The simulated galaxy after 200 million years. While the simulations look very similar with and without the machine learning model, the AI model performed four times as fast, completing the large-scale simulation in a matter of months rather than years.








From Kitchen to Front of House, Restaurants Deploy AI Robots

Restaurants are integrating artificial intelligence (AI)-powered robots end to end in their operations, with machines serving food to diners, cooking meals, delivering orders and even mixing cocktails.

Robots are taking more active roles in both customer-facing and back-kitchen tasks, as restaurants face a perfect storm of challenges that include rising labor and food costs, persistent workforce shortages, and growing consumer demand for efficient service.

The smart restaurant robot industry is expected to exceed $10 billion by 2030, driven by deployment across applications such as delivery, order-taking and table service, according to Archive Market Research.

Restaurants are also deploying AI for administrative tasks. According to a June survey for PYMNTS’ SMB Growth Series, 74.4% of restaurants find AI to be “very or extremely effective” in accomplishing business tasks.

The top three reasons cited for using AI were reducing costs, automating tasks, and adopting standards and accreditation, according to the PYMNTS report. However, only about a third of restaurants are using AI at all.

Robotics trends in restaurants include:

1. Robots delivering food to customers.

Uber Eats recently launched autonomous delivery robots developed by Serve Robotics in the Dallas-Fort Worth metro area. It is part of Serve’s plan to deploy 2,000 AI-powered delivery robots in the U.S. this year.

The launch follows Serve’s deployment of delivery bots in Los Angeles, Atlanta and Miami.

Serve said its latest Gen3 robots can carry 13 gallons of cargo, including four 16-inch pizzas, and travel at up to 11 miles per hour. The robots have all-day batteries, can navigate all types of terrain, and use sensors to achieve Level 4 autonomy, meaning they don’t need human supervision within designated areas.

Uber Eats also partnered with Avride to launch delivery bots in Jersey City, N.J., the first city on the East Coast with the service. The service is already available in Austin and Dallas.

Avride bots can carry up to 55 pounds and travel at 5 miles per hour on sidewalks, navigating using LiDAR, cameras and ultrasonic sensors. They can operate in various weather conditions, travel up to 12 hours between charges, and secure meals in temperature‑controlled compartments.

2. Robot waiters are serving tables in busy dining rooms.

Robot waiters have moved beyond novelty to practical usage. In several U.S. restaurants, robots equipped with multi‑tray delivery systems, obstacle avoidance and SLAM (Simultaneous Localization and Mapping) navigation are serving diners alongside human wait staff.

In January, South Korean giant LG Electronics acquired a 51% stake in Bear Robotics, a Silicon Valley company that makes AI-driven autonomous service robots. Founded in 2017, Bear has been serving the U.S., South Korean and Japanese markets. The acquisition would enable LG to expand its presence in the commercial robot market.

3. Robots fry, flip and assemble food in the kitchen.

In January, Miso Robotics launched its next-generation “Flippy Fry Station” robot for restaurants. It can cook French fries, onion rings, chicken, tacos and other fried items.

The new Flippy robot is half the size of older models and can move twice as fast, according to the company. It is also more reliable and installs in 75% less time — a few hours — in existing kitchens.

It was designed in collaboration with the White Castle burger chain. Older Flippy models were already installed in White Castle, Jack in the Box, CaliBurger and concession outlets at Dodger Stadium in Los Angeles.

4. Robots serve as baristas and bartenders.

Richtech Robotics’ “Adam,” a barista and bartender robot, served 16,000 drinks in its first four months at Clouffee & Tea in Las Vegas, according to the company. The robot serves a variety of milk teas, coffees and desserts, including boba tea.

Powered by AI and Nvidia technology, the robot’s vision system can monitor how much liquid is poured into each cup, adjusting the pour angle and flow rate as necessary.

Adam is also deployed at Walmart, the Golden Corral restaurant chain and Botbar Coffee in Oakland, California, among other partners.

Meanwhile, Makr Shakr’s robotic bartenders — developed in partnership with MIT, Coca‑Cola and Bacardi — operate in cruise ships, airports and hotels worldwide, mixing cocktails in under 60 seconds.

 

Read more: Applebee’s and IHOP to Deploy AI-Powered Tech Support and Personalization

Read more: Chipotle: AI Hiring Platform Cuts Hiring Time by 75%

Read more: How Hardee’s Largest Franchisee Uses AI to Serve Up Efficiency and Profits

Photos, from top: Makr Shakr’s robot bartenders. Credits: Makr Shakr, Serve Robotics, Bear Robotics.




