AI Insights
A robot shows that machines may one day replace human surgeons
Nearly four decades ago, the Defense Advanced Research Projects Agency (DARPA) and NASA began promoting projects that would make it possible to perform surgery remotely, whether on the battlefield or in space. Out of those early efforts emerged robotic surgical systems like da Vinci, which function as an extension of the surgeon, allowing minimally invasive procedures to be carried out using remote controls and 3D vision. But this still involves a human using a sophisticated tool. Now, the integration of generative artificial intelligence and machine learning into the control of systems like da Vinci is bringing the possibility of autonomous surgical robots closer to reality.
This Wednesday, the journal Science Robotics published the results of a study conducted by researchers at Johns Hopkins and Stanford universities, in which they present a system capable of autonomously performing several steps of a surgical procedure, learning from videos of humans operating and receiving commands in natural language — just like a medical resident would.
As with a human trainee, the team of scientists has been gradually teaching the robot the steps needed to complete a surgery. Last year, the Johns Hopkins team, led by Axel Krieger, trained the robot to perform three basic surgical tasks: handling a needle, lifting tissue, and suturing. This training was done through imitation and a machine learning system similar to the one behind ChatGPT, except that instead of words and text it uses a robotic language that translates the machine's movement angles into mathematical data.
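The analogy to ChatGPT suggests treating demonstration trajectories as sequences of discrete tokens. As a rough illustration only (the bin count, angle range, and function names below are assumptions for the sketch, not details from the study), here is how continuous joint angles might be discretized into a token vocabulary for a next-token model, in Python:

```python
# Hypothetical sketch, not the authors' code: discretize robot joint
# angles into integer "tokens" so a language-model-style network can be
# trained on demonstration trajectories.
import numpy as np

N_BINS = 256                     # vocabulary size per joint (assumption)
ANGLE_RANGE = (-np.pi, np.pi)    # joint limits in radians (assumption)

def angles_to_tokens(angles):
    """Map continuous joint angles to integer tokens."""
    lo, hi = ANGLE_RANGE
    clipped = np.clip(angles, lo, hi)
    return np.floor((clipped - lo) / (hi - lo) * (N_BINS - 1)).astype(int)

def tokens_to_angles(tokens):
    """Recover approximate angles (up to quantization error)."""
    lo, hi = ANGLE_RANGE
    return lo + (tokens + 0.5) / N_BINS * (hi - lo)

# A demonstration trajectory becomes a token sequence, analogous to text,
# on which a transformer can be trained to predict the next action.
trajectory = np.array([0.10, 0.12, 0.15, 0.21])  # one joint over time
tokens = angles_to_tokens(trajectory)
print(tokens, tokens_to_angles(tokens))
```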
In the new experiment, two experienced human surgeons performed demonstration gallbladder removals on pig tissue outside the animal. They used 34 gallbladders to collect 17 hours of data and 16,000 trajectories, from which the machine learned. Afterward, working without human intervention on eight gallbladders it had not seen before, the robot was able to perform some of the 17 tasks required to remove the organ with 100% accuracy, such as identifying certain ducts and arteries, grasping them precisely, strategically placing clips, and cutting with scissors. During the experiments, the model was able to correct its own mistakes and adapt to unforeseen situations.
In 2022, this same team performed the first autonomous robotic surgery on a live animal: a laparoscopy on a pig. But that robot needed the tissue to carry special markers, worked in a controlled environment, and followed a pre-established surgical plan. In a statement from his institution, Krieger said it was like teaching a robot to drive a carefully mapped-out route. The new experiment, by contrast, is like asking the robot to drive on a road it has never seen, relying only on a general understanding of how to drive a car.
José Granell, head of the Department of Otolaryngology and Head and Neck Surgery at HLA Moncloa University Hospital and professor at the European University of Madrid, believes that the Johns Hopkins team’s work “is beginning to approach something that resembles real surgery.”
“The problem with robotic surgery on soft tissue is that biology has a lot of intrinsic variability, and even if you know the technique, in the real world there are many possible scenarios,” explains Granell. “Asking a robot to carve a bone is easy, but with soft tissue, everything is more difficult because it moves. You can’t predict how it will react when you push, how much it will move, whether an artery will tear if you pull too hard,” continues the surgeon, adding: “This technology changes the way we train the sequence of gestures that constitute surgery.”
For Krieger, this advancement takes us “from robots that can perform specific surgical tasks to robots that truly understand surgical procedures.” The team leader behind this innovation, made possible by generative AI, argues: “It’s a crucial distinction that brings us significantly closer to clinically viable autonomous surgical systems, capable of navigating the messy and unpredictable reality of real-life patient care.”
Francisco Clascá, professor of Human Anatomy and Embryology at the Autonomous University of Madrid, welcomes the study, but points out that “it’s a very simple surgery” and is performed on organs from “very young animals, which don’t have the level of deterioration and complications of a 60- or 70-year-old person, which is when this type of surgery is typically needed.” Furthermore, the robot is still much slower than a human performing the same tasks.
Mario Fernández, head of the Head and Neck Surgery department at the Gregorio Marañón General University Hospital in Madrid, finds the advance interesting, but believes that replacing human surgeons with machines “is a long way off.” He also cautions against being dazzled by technology without fully understanding its real benefits, and points out that its high cost means it won’t be accessible to everyone.
“I know a hospital in India, for example, where they have a robot and can perform two surgical sessions per month, operating on two patients. A total of 48 per year. For them, robotic surgery may be a way to practice and learn, but it’s not a reality for the patients there,” says Fernández. He believes we should appreciate technological progress, but surgery must be judged by what it actually delivers to patients. As a contrasting example, he points out that “a technique called transoral ultrasonic surgery, which was developed in Madrid and is available worldwide, is performed on six patients every day.”
Krieger believes that their proof of concept shows it’s possible to perform complex surgical procedures autonomously, and that their imitation learning system can be applied to more types of surgeries — something they will continue to test with other interventions.
Looking ahead, Granell points out that, beyond overcoming technical challenges, the adoption of surgical robots will be slow because in surgery, “we are very conservative about patient safety.”
He also raises philosophical questions, such as overcoming Isaac Asimov’s First and Second Laws of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm,” and “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” This specialist highlights the apparent contradiction in the fact that human surgeons “do cause harm, but in pursuit of the patient’s benefit; and this is a dilemma that [for a robot] will have to be resolved.”
AI Insights
Teachers Training on AI
MOBILE, Ala. (WALA) – Some leading tech companies are investing millions to train teachers on how to use artificial intelligence. The $23 million initiative is backed by Microsoft, OpenAI, Anthropic, and two teachers’ unions. The goal is to train 400,000 kindergarten through 12th-grade teachers in artificial intelligence over the next five years. The National Academy of AI Instruction announced the effort. The group states that it will develop an AI training curriculum for teachers that can be distributed online and at an in-person campus in New York City.
The announcement comes as schools, teachers, and parents grapple with whether, and how, AI should be used in the classroom. Educators want to ensure students know how to use a technology that is already transforming workplaces, while also using AI themselves to automate some tasks and free up more time to engage with students.
Samsung unveils its new line of foldable devices at Unpacked
Samsung showed off its latest foldable smartphones, with the new Galaxy Z Fold 7 and Z Flip 7 taking center stage at the company’s latest Unpacked event. The Korean electronics company unveiled the upgrades, including new versions of its watch, and announced an expanded partnership with Google to inject more artificial intelligence into its foldable lineup. Users can, for example, access AI by speaking to their watch, which, yes, also tells the time.
The Fold 7 will retail starting at $1,999. Pre-orders start today, and the device will hit shelves on July 25.
The Galaxy Z Flip 7 will retail for $1,099.99, and the Flip 7 FE starts at $899.99. Pre-orders for both devices began Wednesday, and both will be generally available on July 25.
AI Insights
AI vs Supercomputers round 1: galaxy simulation goes to AI
Jul. 10, 2025
Press Release
In the first study of its kind, researchers led by Keiya Hirashima at the RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS) in Japan, along with colleagues from the Max Planck Institute for Astrophysics (MPA) and the Flatiron Institute, have used machine learning, a type of artificial intelligence, to dramatically speed up simulations of galaxy evolution coupled with supernova explosions. This approach could help us understand the origins of our own galaxy, particularly the elements essential for life in the Milky Way.
Understanding how galaxies form is a central problem for astrophysicists. Although we know that powerful events like supernovae can drive galaxy evolution, we cannot simply look to the night sky and see it happen. Scientists rely on numerical simulations that are based on large amounts of data collected from telescopes and other devices that measure aspects of interstellar space. Simulations must account for gravity and hydrodynamics, as well as other complex aspects of astrophysical thermo-chemistry.
On top of this, they must have high temporal resolution, meaning that the time between each 3D snapshot of the evolving galaxy must be small enough that critical events are not missed. For example, capturing the initial phase of supernova shell expansion requires timesteps of only a few hundred years, about 1,000 times finer than typical simulations of interstellar space can achieve. In fact, a typical supercomputer takes one to two years to carry out a simulation of a relatively small galaxy at the proper temporal resolution.
Getting over this timestep bottleneck was the main goal of the new study. By incorporating AI into their data-driven model, the research group was able to match the output of a previously modeled dwarf galaxy but got the result much more quickly. “When we use our AI model, the simulation is about four times faster than a standard numerical simulation,” says Hirashima. “This corresponds to a reduction of several months to half a year’s worth of computation time. Critically, our AI-assisted simulation was able to reproduce the dynamics important for capturing galaxy evolution and matter cycles, including star formation and galaxy outflows.”
Like most machine learning models, the researchers’ new model is trained on one set of data and then becomes able to predict outcomes for new data. In this case, the model is built around a neural network and was trained on 300 simulations of an isolated supernova in a molecular cloud with a mass of one million suns. After training, the model could predict the density, temperature, and 3D velocity of the gas 100,000 years after a supernova explosion. Compared with direct numerical simulations such as those performed on supercomputers, the new model yielded similar structures and star formation histories while taking a quarter of the computation time.
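To make the idea concrete, here is a minimal sketch of a surrogate network of this general kind, written in Python with PyTorch. Everything below (the per-cell feature layout, network size, and training loop) is an assumption for illustration; the release does not describe the actual architecture:

```python
# Illustrative sketch only (assumed architecture, not the RIKEN model):
# a surrogate network that maps pre-supernova gas features to the gas
# state ~100,000 years later, standing in for the expensive
# small-timestep portion of the simulation.
import torch
import torch.nn as nn

# Hypothetical feature layout: per-cell (density, temperature, vx, vy, vz)
N_FEATURES = 5

surrogate = nn.Sequential(
    nn.Linear(N_FEATURES, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_FEATURES),   # predicted state after the time jump
)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for the 300 training simulations: random tensors here.
x_train = torch.randn(300, N_FEATURES)  # state just before the supernova
y_train = torch.randn(300, N_FEATURES)  # state 100,000 years afterward

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(surrogate(x_train), y_train)
    loss.backward()
    optimizer.step()

# At run time, the solver would call surrogate(...) instead of
# integrating the supernova shell expansion at ~100-year timesteps.
```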
According to Hirashima, “our AI-assisted framework will allow high-resolution star-by-star simulations of heavy galaxies, such as the Milky Way, with the goal of predicting the origin of the solar system and the elements essential for the birth of life.”
Currently, the lab is using the new framework to run a Milky Way-sized galaxy simulation.
Contact
Keiya Hirashima, Special Postdoctoral Researcher
Division of Fundamental Mathematical Science, RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS)
Adam Phillips
RIKEN Communications Division
Email: adam.phillips [at] riken.jp
The simulated galaxy after 200 million years. While the simulations look very similar with and without the machine learning model, the AI-assisted run was four times faster, completing a large-scale simulation in a matter of months rather than years.
AI Insights
From Kitchen to Front of House, Restaurants Deploy AI Robots
Restaurants are integrating artificial intelligence (AI)-powered robots throughout their operations, with the machines serving food to diners, cooking meals, delivering orders and even mixing cocktails.