
AI Insights

AI Researchers Explore Whether Soft Robotics and Embodied Cognition Unlock Artificial General Intelligence

IN A NUTSHELL
  • 🤖 Researchers explore whether AI needs a physical body to achieve true intelligence.
  • 🧠 The concept of embodied cognition suggests that sensing, acting, and thinking are interconnected.
  • 🐙 Soft robotics, inspired by creatures like the octopus, offer a new path for developing adaptive AI.
  • 🔄 Autonomous physical intelligence (API) allows materials to self-regulate and make decisions independently.

In the realm of artificial intelligence (AI), the concept of whether machines require physical bodies to achieve true intelligence has long been a topic of debate. Popular culture, from Rosie the robot maid in “The Jetsons” to the empathetic C-3PO in “The Empire Strikes Back,” has offered diverse interpretations of robots and AI. However, these fictional portrayals often overlook the complexities and limitations faced by real-world AI systems. With recent advancements in robotics and AI, researchers are revisiting the question of embodiment in AI, exploring whether a physical form could be essential for achieving artificial general intelligence (AGI). This exploration could redefine our understanding of cognition, intelligence, and the future of AI technology.

The Limits of Disembodied AI

Recent studies have highlighted the shortcomings of disembodied AI systems, particularly in their ability to perform complex tasks. A study from Apple on Large Reasoning Models (LRMs) found that while these systems can outperform standard language models in some scenarios, they struggle significantly with more complex problems. Despite having ample computing power, these models often collapse under complexity, revealing a fundamental flaw in their reasoning capabilities.

Unlike humans, who can reason consistently and algorithmically, these AI models lack internal logic in their “reasoning traces.” Nick Frosst, a former Google researcher, emphasized this discrepancy, noting that current AI systems merely predict the next most likely word rather than truly think like humans. This raises concerns about the viability of disembodied AI in replicating human-like intelligence.

“What we are building now are things that take in words and predict the next most likely word … That’s very different from what you and I do,” Frosst told The New York Times.

The limitations of disembodied AI underscore the need for exploring alternative approaches to achieve true cognitive abilities in machines.


Cognition Is More Than Just Computation

Historically, artificial intelligence was developed under the paradigm of Good Old-Fashioned Artificial Intelligence (GOFAI), which treated cognition as symbolic logic. This approach assumed that intelligence could be built by processing symbols, akin to a computer executing code. However, real-world challenges exposed the limitations of this model, leading researchers to question whether intelligence could be achieved without a physical body.

Research from various disciplines, including psychology and neuroscience, suggests that intelligence is inherently linked to physical interactions with the environment. In humans, the enteric nervous system, often referred to as the “second brain,” operates independently, illustrating that intelligence can be distributed throughout an organism rather than centralized in a brain.

This has led to the concept of embodied cognition, where sensing, acting, and thinking are interconnected processes. As Rolf Pfeifer, Director of the University of Zurich’s Artificial Intelligence Laboratory, pointed out, “Brains have always developed in the context of a body that interacts with the world to survive.” This perspective challenges the traditional view of cognition and suggests that a physical body might be crucial for developing adaptable and intelligent systems.

Embodied Intelligence: A Different Kind of Thinking

The exploration of embodied intelligence has prompted researchers to consider new approaches to AI development. Cecilia Laschi, a pioneer in soft robotics, advocates for the use of soft-bodied machines inspired by organisms like the octopus. These creatures demonstrate a form of intelligence that is distributed throughout their bodies, allowing them to adapt and respond to their environments without centralized control.


Laschi argues that smarter AI requires softer, more flexible bodies that can offload perception, control, and decision-making to the physical structure of the robot itself. This approach reduces the computational demands on the main AI system, enabling it to function more effectively in unpredictable environments.

In a May special issue of Science Robotics, Laschi explained that “motor control is not entirely managed by the computing system … motor behavior is partially shaped mechanically by external forces acting on the body.” This suggests that behavior and intelligence are shaped by experience and interaction with the environment, rather than pre-programmed algorithms.

The field of soft robotics, which employs materials like silicone and special fabrics, offers promising possibilities for creating adaptive, real-time learning systems. By integrating flexibility and adaptability into the physical form of AI, researchers are paving the way for machines that can think and learn in ways similar to living organisms.

Flesh and Feedback: How to Make Materials Think for Themselves

The development of soft robotics is also advancing the concept of autonomous physical intelligence (API), where materials themselves exhibit decision-making capabilities. Ximin He, an Associate Professor of Materials Science and Engineering at UCLA, has been at the forefront of this research, designing soft materials that not only react to stimuli but also regulate their movements using built-in feedback.


Ximin He’s approach involves embedding logic directly into the materials, allowing them to sense, act, and decide autonomously. This method contrasts with traditional robotics, which relies on external control systems to analyze sensory data and dictate actions. By incorporating nonlinear feedback mechanisms, soft robots can achieve rhythmic, controlled behaviors without external intervention.
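The idea that rhythm can emerge from feedback alone, rather than from a clock inside a controller, has a classic textbook illustration: the Van der Pol oscillator. The sketch below is not He’s material system, just a minimal numerical toy showing how a single nonlinear feedback term produces self-sustained rhythmic motion without any external driver:

```python
def van_der_pol(mu=1.0, dt=0.01, steps=20000, x0=0.1, v0=0.0):
    """Integrate the Van der Pol oscillator  x'' - mu*(1 - x^2)*x' + x = 0
    with simple Euler steps. The nonlinear damping term feeds energy into
    small motions and drains it from large ones, so any small nudge settles
    into a sustained rhythmic limit cycle with no external controller."""
    x, v = x0, v0
    xs = []
    for _ in range(steps):
        a = mu * (1.0 - x * x) * v - x   # nonlinear feedback term
        x += v * dt
        v += a * dt
        xs.append(x)
    return xs

xs = van_der_pol()
tail = xs[-5000:]              # steady-state behavior after transients decay
print(round(max(tail), 2), round(min(tail), 2))   # amplitude settles near 2
```

The point of the toy is structural: the oscillation is a property of the feedback law itself, much as He argues that rhythmic behavior can be a property of the material rather than of a computing system.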

This work has demonstrated the potential for soft materials to self-regulate their movements, a significant advance toward creating lifelike autonomy in machines. The approach opens up new possibilities for AI systems that can adapt and respond to their environments in more natural and intuitive ways.

By integrating sensing, control, and actuation at the material level, researchers are moving closer to developing machines that can independently decide, adapt, and act, paving the way for a new era of intelligent robotics.

As researchers continue to explore the potential of embodied intelligence and soft robotics, the future of AI appears increasingly promising. These innovations could lead to breakthroughs in fields ranging from medicine to environmental exploration, offering machines that are not only intelligent but also capable of understanding and interacting with the world in new ways. However, questions remain about how these technologies will be integrated into society and the ethical implications of creating machines with lifelike autonomy. As we move forward, how will the intersection of AI and physical embodiment redefine our relationship with technology and the world around us?



AI can predict which patients need treatment to preserve their eyesight

Researchers have successfully used artificial intelligence (AI) to predict which patients need treatment to stabilize their corneas and preserve their eyesight, in a study presented today (Sunday) at the 43rd Congress of the European Society of Cataract and Refractive Surgeons (ESCRS).

The research focused on people with keratoconus, an eye condition that generally develops in teenagers and young adults and tends to worsen into adulthood. It affects up to 1 in 350 people. In some cases, the condition can be managed with contact lenses, but in others it deteriorates quickly, and if it is not treated, patients may need a corneal transplant. Currently, the only way to tell who needs treatment is to monitor patients over time.

The researchers used AI to assess images of patients’ eyes, combined with other data, and to successfully predict which patients needed prompt treatment and which could continue with monitoring.

The study was by Dr. Shafi Balal and colleagues at Moorfields Eye Hospital NHS Foundation Trust, London, and University College London (UCL), UK. He said: “In people with keratoconus, the cornea – the eye’s front window – bulges outwards. Keratoconus causes visual impairment in young, working-age patients and it is the most common reason for corneal transplantation in the Western world.

“A single treatment called ‘cross-linking’ can halt disease progression. When performed before permanent scarring develops, cross-linking often prevents the need for corneal transplantation. However, doctors cannot currently predict which patients will progress and require treatment, and which will remain stable with monitoring alone. This means patients need frequent monitoring over many years, with cross-linking typically performed after progression has already occurred.”

The study involved a group of patients who were referred to Moorfields Eye Hospital NHS Foundation Trust for keratoconus assessment and monitoring, including scanning the front of the eye with optical coherence tomography (OCT) to examine its shape. Researchers used AI to study 36,673 OCT images of 6,684 different patients along with other patient data.

The AI algorithm could accurately predict whether a patient’s condition would deteriorate or remain stable using images and data from the first visit alone. Using AI, the researchers could sort two-thirds of patients into a low-risk group, who did not need treatment, and the other third into a high-risk group, who needed prompt cross-linking treatment. When information from a second hospital visit was included, the algorithm could successfully categorise up to 90% of patients.
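The triage logic the study reports can be pictured as a simple threshold rule over a predicted progression risk score. The sketch below is a toy illustration with made-up scores, not the study’s actual algorithm:

```python
def stratify(risk_scores, threshold=0.5):
    """Split patients into a monitoring group and a prompt-treatment group
    based on a predicted probability of keratoconus progression in [0, 1]."""
    low = [r for r in risk_scores if r < threshold]    # continue monitoring
    high = [r for r in risk_scores if r >= threshold]  # prompt cross-linking
    return low, high

# Hypothetical first-visit risk scores for nine patients:
scores = [0.05, 0.12, 0.18, 0.27, 0.33, 0.41, 0.62, 0.78, 0.91]
low, high = stratify(scores)
print(len(low), len(high))  # 6 low-risk, 3 high-risk: a two-thirds / one-third split
```

In the real study, the risk score comes from an AI model trained on OCT images plus patient data, and the reported accuracy improves when a second visit’s data is added.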

Cross-linking treatment uses ultraviolet light and vitamin B2 (riboflavin) drops to stiffen the cornea, and it is successful in more than 95% of cases.

“Our research shows that we can use AI to predict which patients need treatment and which can continue with monitoring. This is the first study of its kind to obtain this level of accuracy in predicting the risk of keratoconus progression from a combination of scans and patient data, and it uses a large cohort of patients monitored over two years or more. Although this study is limited to using one specific OCT device, the research methods and AI algorithm used can be applied to other devices. The algorithm will now undergo further safety testing before it is deployed in the clinical setting.

“Our results could mean that patients with high-risk keratoconus will be able to receive preventative treatment before their condition progresses. This will prevent vision loss and avoid the need for corneal transplant surgery with its associated complications and recovery burden. Low-risk patients will avoid unnecessary frequent monitoring, freeing up healthcare resources. The effective sorting of patients by the algorithm will allow specialists to be redirected to areas with the greatest need.”

Dr. Shafi Balal, Moorfields Eye Hospital NHS Foundation Trust

The researchers are now developing a more powerful AI algorithm, trained on millions of eye scans, that can be tailored for specific tasks, including predicting keratoconus progression, but also other tasks such as detecting eye infections and inherited eye diseases.

Dr. José Luis Güell, ESCRS Trustee and Head of the Cornea, Cataract and Refractive Surgery Department at the Instituto de Microcirugía Ocular, Barcelona, Spain, who was not involved in the research, said: “Keratoconus is a manageable condition, but knowing who to treat, and when and how to give treatment is challenging. Unfortunately, this problem can lead to delays, with many patients experiencing vision loss and requiring invasive implant or transplant surgery.

“This research suggests that we can use AI to help predict who will progress, even from their first routine consultation, meaning we could treat patients early before progression and secondary changes. Equally, we could reduce unnecessary monitoring of patients whose condition is stable. If it consistently demonstrates its effectiveness, this technology would ultimately prevent vision loss and more difficult management strategies in young, working-age patients.”




Billionaire Dan Loeb Just Changed His Mind on This Incredible Artificial Intelligence (AI) Stock

After eliminating it from his fund’s portfolio in the first quarter, this stock was one of Loeb’s biggest purchases in the second quarter.

Billionaire Dan Loeb is one of the most-followed activist investors on Wall Street. His hedge fund, Third Point, manages $21.1 billion, with around one-third of that invested in a public equity portfolio.

He is supported by a team of over 60 people, but ultimately, Loeb is in charge of the moves in Third Point’s portfolio. He said that by mid-April, he had sold out of most of the “Magnificent Seven” stocks, taking gains off the table early in 2025 before the market crashed amid tariff concerns.

By the end of the first quarter, he’d sold off significant pieces of his stakes in Microsoft and Amazon while completely eliminating positions in Tesla, Apple, and Meta Platforms (META 0.62%). But Loeb was a buyer of most of those again in the second quarter, including Meta. Here’s why Loeb may have changed his mind on the AI leader.


Why did Loeb sell Meta in the first place?

Loeb’s decision to sell Meta shares seemed to have been driven mostly by its rising valuation. Shares of Meta reached a forward P/E ratio of 26.5 during the first quarter.
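For reference, a forward P/E is simply the share price divided by the earnings per share analysts expect over the next 12 months. The figures below are hypothetical, chosen only to reproduce a 26.5 multiple, not Meta’s actual numbers:

```python
def forward_pe(price_per_share, expected_forward_eps):
    """Forward price-to-earnings ratio: today's share price divided by the
    earnings per share expected over the next 12 months."""
    return price_per_share / expected_forward_eps

# Hypothetical illustration only (not Meta's actual price or EPS estimate):
print(round(forward_pe(530.0, 20.0), 1))  # 26.5
```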

“We realized gains earlier in the year through opportunistic sales near the highs in Meta,” Loeb said in his first-quarter letter to Third Point investors.

It’s very likely that Loeb was concerned about that valuation as uncertainty grew about President Donald Trump’s trade policies. Meta’s core advertising business relies on business confidence. If businesses aren’t confident in their ability to source their products or in the consumer’s willingness to spend, they’re going to be less willing to pay up for advertising on Meta’s apps.

Meanwhile, Meta is investing heavily in artificial intelligence infrastructure. Management said it plans to spend $60 billion to $65 billion on capital expenditures this year, up from $39 billion in 2024. Given the growing uncertainty about what the near-term returns on those investments might be, Loeb took an opportunity to take some money off the table.

Tiptoeing back in

Third Point ended the second quarter with 150,000 shares of Meta. While that only accounted for about 1.5% of its public equity portfolio at the time, it was still enough to make it one of the hedge fund’s biggest purchases in the quarter.

So, what led to the reversal?

It may have been the strong first-quarter earnings report Meta delivered at the end of April. The company saw strong revenue growth, expanded its operating margin, and expressed a lot of confidence about the next quarter and beyond. It raised its capital expenditure plans as well.

Management also made it clear that Meta’s investments in artificial intelligence are already paying off. That assertion was supported by growth in both ad impressions and average price per ad, which it boosted by consistently improving its content and ad recommendation algorithms. The long-term potential for AI to make it easier for marketers to advertise on Meta’s properties and for it to expand advertising opportunities remains a key focus of the company’s spending.

But Meta shares are once again trading at a high valuation. In fact, the stock now carries a higher earnings multiple than it did when Loeb and his team sold the stock in the first quarter.

Should retail investors buy Meta Platforms now?

Meta’s first-quarter results gave investors like Loeb confidence in the stock, and its second-quarter results were arguably even better.

Revenue growth accelerated, and its operating margin expanded once again. The operating margin gains are perhaps the most impressive facet of the narrative, as management has warned about an increase in depreciation expenses from all of its AI investments.

But those AI investments may be the differentiating factor between Meta and other digital advertising platforms. Meta is able to offer marketers higher returns on their ad spending, even while charging them premium prices. As a result, Meta grew its revenue faster than smaller social media platforms did last quarter.

That should give investors confidence that its AI strategy is already paying off. Combine that with the long-term potential for AI to transform the business, and it makes sense for the stock to trade at a premium price. With shares currently trading at just over 27 times expected forward earnings, it may still be underpriced. We won’t know whether or not Loeb took profits once again until November, when Third Point files its next 13F disclosure with the Securities and Exchange Commission. But for most retail investors, Meta shares are worth buying or holding onto right now.

Adam Levy has positions in Amazon, Apple, Meta Platforms, and Microsoft. The Motley Fool has positions in and recommends Amazon, Apple, Meta Platforms, Microsoft, and Tesla. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




Prediction: This Artificial Intelligence (AI) Stock Will Beat Opendoor Technologies over the Next 3 Years

Opendoor has been on a tear, but this fintech stock looks like a better long-term winner.

Opendoor Technologies (OPEN -13.59%) has dazzled investors over the last three months like few other stocks. The online home-flipper jumped an incredible 1,400%, going from a little over $0.50 a share to more than $10 at one point.

The rally began with hedge-fund manager Eric Jackson making the case that the stock could be the next Carvana, which rebounded to almost 100 times its 2022 low after nearly going bankrupt that year. That argument gained steam online and helped turn Opendoor into a meme stock, as it initially surged on high volume and no news.

Since then, the stock gained on real news. That includes the prospect of the Federal Reserve lowering interest rates next week and later in the year, and the company’s board overhauling its management team. In August, embattled CEO Carrie Wheeler stepped down; after hours on Wednesday, Opendoor named Shopify chief operating officer Kaz Nejatian as its new CEO, which sent the stock up 80% on Thursday.

Additionally, the company said that co-founders Keith Rabois and Eric Wu were rejoining the board of directors, and ventures associated with them were investing $40 million into Opendoor. It’s easy to see how that news would inject enthusiasm into the stock, especially after it was on the verge of being delisted by the Nasdaq stock exchange earlier.

However, nothing has really changed for Opendoor as a business in the last three months. The company has never reported a full-year profit, and the business is expected to shrink this quarter due to the weak housing market.

It’s still a high-risk stock with a questionable business model. If you’re looking for a similar stock that can capitalize on falling interest rates, I think Upstart Holdings (UPST 1.54%) is a better bet, and that it can outperform Opendoor over the next three years.


Upstart’s opportunity

Upstart has a number of things in common with Opendoor. Both went public around the same time in 2020, and initially surged out of the gate before plunging in 2022 as interest rates rose and tech stocks crashed.

Upstart is a loan originator. It uses artificial intelligence (AI) technology to screen applicants, producing results it claims are significantly better than traditional FICO scores. Once it creates a loan, it typically sells it to one of its funding partners, so it doesn’t keep the debt on its books.

Like Opendoor’s, Upstart’s business was struggling back in 2022, but the company revamped its business with the help of an improved AI model that increased conversion rates for its loans. Even in a high-interest-rate environment, it’s delivering strong revenue growth. And it’s now profitable based on generally accepted accounting principles (GAAP).

Revenue in the second quarter jumped 102% to $257 million, on a 159% increase in transaction volume. The company reported GAAP net income of $5.6 million, and for the full year, it expects that to be $35 million.
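A reported growth rate also lets you back out the year-ago base: growing 102% to $257 million implies revenue of roughly $127 million a year earlier. A one-line sanity check:

```python
def implied_prior(current, growth_pct):
    """Back out the year-ago figure from a current value and its reported
    year-over-year percentage growth rate."""
    return current / (1.0 + growth_pct / 100.0)

print(round(implied_prior(257.0, 102.0), 1))  # ~127.2 (millions of dollars)
```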

Upstart built its business around consumer loans, but it’s been expanding rapidly into auto and home loans. The home loan market, where it could potentially compete with Opendoor, is massive. In the second quarter, Upstart’s home originations grew nearly 800% from the year-ago quarter to $68 million. That’s still a small fraction of its business, but there’s clearly more growth ahead in the home loan market for Upstart.

Upstart vs. Opendoor

Upstart and Opendoor have similar market caps following Opendoor’s surge. Upstart is valued at $6.1 billion as of Friday, while Opendoor’s market cap is $6.7 billion.

Both companies are also chasing massive addressable markets, and are likely to benefit from lower interest rates.

However, Upstart is the only one of the two that has proven it can grow in a challenging macro environment, and its business now looks set for consistent profitability. At Opendoor, meanwhile, there are real questions about whether home-flipping can scale up as a business model and deliver a consistent profit. Notably, both Zillow Group and Redfin (a subsidiary of Rocket Companies) bowed out of the iBuying competition, finding it too difficult and prone to large losses.

Given those differences, despite the fanfare over Opendoor, Upstart looks like the better bet today. Over the next three years, Upstart looks set to be the winner of the two.

Jeremy Bowman has positions in Carvana, Rocket Companies, Shopify, and Upstart. The Motley Fool has positions in and recommends Shopify, Upstart, and Zillow Group. The Motley Fool recommends Nasdaq and Rocket Companies. The Motley Fool has a disclosure policy.


