AI Insights
Do AI systems socially interact the same way as living beings?
Key takeaways
- A new study comparing biological brains with artificial intelligence systems analyzed the neural network patterns that emerged during social and non-social tasks in mice and in programmed artificial intelligence agents.
- UCLA researchers identified high-dimensional “shared” and “unique” neural subspaces both when mice interacted socially and when AI agents engaged in social behaviors.
- Findings could help advance understanding of human social disorders and develop AI that can understand and engage in social interactions.
As AI systems are increasingly integrated into everyday roles, from virtual assistants and customer service agents to counseling and AI companions, an understanding of social neural dynamics is essential for both scientific and technological progress. A new study from UCLA researchers shows that biological brains and AI systems develop remarkably similar neural patterns during social interaction.
The study, recently published in the journal Nature, reveals that when mice interact socially, specific brain cell types synchronize in “shared neural spaces,” and artificial intelligence agents develop analogous patterns when engaging in social behaviors.
The new research represents a striking convergence of neuroscience and artificial intelligence, two of today’s most rapidly advancing fields. By directly comparing how biological brains and AI systems process social information, scientists can now better understand fundamental principles that govern social cognition across different types of intelligent systems. The findings could advance understanding of social disorders like autism while simultaneously informing the development of more sophisticated, socially aware AI systems.
This work was supported in part by the National Science Foundation, the Packard Foundation, the Vallee Foundation, the Mallinckrodt Foundation and the Brain and Behavior Research Foundation.
Examining AI agents’ social behavior
A multidisciplinary team from UCLA’s departments of neurobiology, biological chemistry, bioengineering, electrical and computer engineering, and computer science across the David Geffen School of Medicine and UCLA Samueli School of Engineering used advanced brain imaging techniques to record activity from molecularly defined neurons in the dorsomedial prefrontal cortex of mice during social interactions. The researchers developed a novel computational framework to identify high-dimensional “shared” and “unique” neural subspaces across interacting individuals. The team then trained artificial intelligence agents to interact socially and applied the same analytical framework to examine neural network patterns in AI systems that emerged during social versus non-social tasks.
The research revealed striking parallels between biological and artificial systems during social interaction. In both mice and AI systems, neural activity could be partitioned into two distinct components: a “shared neural subspace” containing synchronized patterns between interacting entities, and a “unique neural subspace” containing activity specific to each individual.
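The shared/unique partition described above can be illustrated with a toy sketch. This is not the authors' published framework, only a minimal stand-in under stated assumptions: it simulates two agents whose activity is driven by a common latent signal, then splits one agent's activity into a component that covaries with its partner (via an SVD of the cross-covariance) and a residual unique component. All variable names and the simulation itself are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated activity for two interacting agents: T time steps,
# n neurons each. A common 3-dimensional latent drives the
# "shared" component; independent noise supplies the rest.
T, n = 500, 20
shared_latent = rng.standard_normal((T, 3))
X = shared_latent @ rng.standard_normal((3, n)) + 0.5 * rng.standard_normal((T, n))
Y = shared_latent @ rng.standard_normal((3, n)) + 0.5 * rng.standard_normal((T, n))

def shared_unique_split(X, Y, k=3):
    """Split X into a shared subspace (directions maximally
    cross-correlated with Y) and a unique residual."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    # The left singular vectors of the cross-covariance pick the
    # directions in X that covary most strongly with Y.
    U, _, _ = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    W = U[:, :k]              # top-k shared directions in X
    shared = Xc @ W @ W.T     # projection onto the shared subspace
    unique = Xc - shared      # everything orthogonal to it
    return shared, unique

shared, unique = shared_unique_split(X, Y)
# Because a common latent drives both simulated agents, the shared
# component captures most of the variance in this toy example.
var_shared = shared.var()
var_unique = unique.var()
```

In this toy setup the shared component dominates by construction; in the study, the analogous decomposition was applied to recorded neural activity, where the relative sizes of the two subspaces were the empirical question.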
Remarkably, GABAergic neurons — inhibitory brain cells that regulate neural activity — showed significantly larger shared neural spaces compared with glutamatergic neurons, which are the brain’s primary excitatory cells. This represents the first investigation of inter-brain neural dynamics in molecularly defined cell types, revealing previously unknown differences in how specific neuron types contribute to social synchronization.
When the same analytical framework was applied to AI agents, shared neural dynamics emerged as the artificial systems developed social interaction capabilities. Most importantly, when researchers selectively disrupted these shared neural components in artificial systems, social behaviors were substantially reduced, providing direct evidence that synchronized neural patterns causally drive social interactions.
The study also revealed that shared neural dynamics don’t simply reflect coordinated behaviors between individuals, but emerge from representations of each other’s unique behavioral actions during social interaction.
“This discovery fundamentally changes how we think about social behavior across all intelligent systems,” said Weizhe Hong, professor of neurobiology, biological chemistry and bioengineering at UCLA and lead author of the new work. “We’ve shown for the first time that the neural mechanisms driving social interaction are remarkably similar between biological brains and artificial intelligence systems. This suggests we’ve identified a fundamental principle of how any intelligent system — whether biological or artificial — processes social information. The implications are significant for both understanding human social disorders and developing AI that can truly understand and engage in social interactions.”
Continuing research for treating social disorders and training AI
The research team plans to further investigate shared neural dynamics in different and potentially more complex social interactions. They also aim to explore how disruptions in shared neural space might contribute to social disorders and whether therapeutic interventions could restore healthy patterns of inter-brain synchronization. The artificial intelligence framework may serve as a platform for testing hypotheses about social neural mechanisms that are difficult to examine directly in biological systems. They also aim to develop methods to train socially intelligent AI.
The study was led by UCLA’s Hong and Jonathan Kao, associate professor of electrical and computer engineering. Co-first authors Xingjian Zhang and Nguyen Phi, along with collaborators Qin Li, Ryan Gorzek, Niklas Zwingenberger, Shan Huang, John Zhou, Lyle Kingsbury, Tara Raam, Ye Emily Wu and Don Wei contributed to the research.
Intro robotics students build AI-powered robot dogs from scratch
Equipped with a starter robot hardware kit and cutting-edge lessons in artificial intelligence, students in CS 123: A Hands-On Introduction to Building AI-Enabled Robots are mastering the full spectrum of robotics – from motor control to machine learning. Now in its third year, the course has students build and enhance an adorable quadruped robot, Pupper, programming it to walk, navigate, respond to human commands, and perform a specialized task that they showcase in their final presentations.
The course, which evolved from an independent study project led by Stanford’s robotics club, is now taught by Karen Liu, professor of computer science in the School of Engineering, in addition to Jie Tan from Google DeepMind and Stuart Bowers from Apple and Hands-On Robotics. Throughout the 10-week course, students delve into core robotics concepts, such as movement and motor control, while connecting them to advanced AI topics.
“We believe that the best way to help and inspire students to become robotics experts is to have them build a robot from scratch,” Liu said. “That’s why we use this specific quadruped design. It’s the perfect introductory platform for beginners to dive into robotics, yet powerful enough to support the development of cutting-edge AI algorithms.”
What makes the course especially approachable is its low barrier to entry – students need only basic programming skills to get started. From there, the students build up the knowledge and confidence to tackle complex robotics and AI challenges.
Robot creation goes mainstream
Pupper evolved from Doggo, built by the Stanford Student Robotics club to offer people a way to create and design a four-legged robot on a budget. When the team saw the cute quadruped’s potential to make robotics both approachable and fun, they pitched the idea to Bowers, hoping to turn their passion project into a hands-on course for future roboticists.
“We wanted students who were still early enough in their education to explore and experience what we felt the future of AI robotics was going to be,” Bowers said.
This current version of Pupper is more powerful and refined than its predecessors. It’s also irresistibly adorable and easier than ever for students to build and interact with.
“We’ve come a long way in making the hardware better and more capable,” said Ankush Kundan Dhawan, one of the first students to take the Pupper course in the fall of 2021 before becoming its head teaching assistant. “What really stuck with me was the passion that instructors had to help students get hands-on with real robots. That kind of dedication is very powerful.”
Code comes to life
Building a Pupper from a starter hardware kit blends different types of engineering, including electrical work, hardware construction, coding, and machine learning. Some students even produced custom parts for their final Pupper projects. The course pairs weekly lectures with hands-on labs. Lab titles like Wiggle Your Big Toe and Do What I Say keep things playful while building real skills.
CS 123 students ready to show off their Pupper’s tricks. | Harry Gregory
Over the initial five weeks, students are taught the basics of robotics, including how motors work and how robots can move. In the next phase of the course, students add a layer of sophistication with AI. Using neural networks to improve how the robot walks, sees, and responds to the environment, they get a glimpse of state-of-the-art robotics in action. Many students also use AI in other ways for their final projects.
“We want them to actually train a neural network and control it,” Bowers said. “We want to see this code come to life.”
By the end of the quarter this spring, students were ready for their capstone project, called the “Dog and Pony Show,” where guests from NVIDIA and Google were present. Six teams had Pupper perform creative tasks – including navigating a maze and fighting a (pretend) fire with a water pick – surrounded by the best minds in the industry.
“At this point, students know all the essential foundations – locomotion, computer vision, language – and they can start combining them and developing state-of-the-art physical intelligence on Pupper,” Liu said.
“This course gives them an overview of all the key pieces,” said Tan. “By the end of the quarter, the Pupper that each student team builds and programs from scratch mirrors the technology used by cutting-edge research labs and industry teams today.”
All ready for the robotics boom
The instructors believe the field of AI robotics is still gaining momentum, and they’ve made sure the course stays current by integrating new lessons and technology advances nearly every quarter.
This Pupper was mounted with a small water jet to put out a pretend fire. | Harry Gregory
Students have responded to the course with resounding enthusiasm and the instructors expect interest in robotics – at Stanford and in general – will continue to grow. They hope to be able to expand the course, and that the community they’ve fostered through CS 123 can contribute to this engaging and important discipline.
“The hope is that many CS 123 students will be inspired to become future innovators and leaders in this exciting, ever-changing field,” said Tan.
“We strongly believe that now is the time to make the integration of AI and robotics accessible to more students,” Bowers said. “And that effort starts here at Stanford and we hope to see it grow beyond campus, too.”
Why Infuse Asset Management’s Q2 2025 Letter Signals a Shift to Artificial Intelligence and Cybersecurity Plays
The rapid evolution of artificial intelligence (AI) and the escalating complexity of cybersecurity threats have positioned these sectors as the next frontier of investment opportunity. Infuse Asset Management’s Q2 2025 letter underscores this shift, emphasizing AI’s transformative potential and the urgent need for robust cybersecurity infrastructure to mitigate risks. Below, we dissect the macroeconomic forces, sector-specific tailwinds, and portfolio reallocation strategies investors should consider in this new paradigm.
The AI Uprising: Macro Drivers of a Paradigm Shift
The AI revolution is accelerating at a pace that dwarfs historical technological booms. Take ChatGPT, which reached 800 million weekly active users by April 2025—a milestone achieved in just two years. This breakneck adoption is straining existing cybersecurity frameworks, creating a critical gap between innovation and defense.
Meanwhile, the U.S.-China AI rivalry is fueling a global arms race. China’s industrial robot installations surged from 50,000 in 2014 to 290,000 in 2023, outpacing U.S. adoption. This competition isn’t just about economic dominance—it’s a geopolitical chess match where data sovereignty, espionage, and AI-driven cyberattacks now loom large. The concept of “Mutually Assured AI Malfunction (MAIM)” highlights how even a single vulnerability could destabilize critical systems, much like nuclear deterrence but with far less predictability.
Cybersecurity: The New Infrastructure for an AI World
As AI systems expand into physical domains—think autonomous taxis or industrial robots—so do their vulnerabilities. In San Francisco, autonomous taxi providers now command 27% market share, yet their software is a prime target for cyberattacks. The decline in AI inference costs (outpacing historical declines in electricity and memory) has made it cheaper to deploy AI, but it also lowers the barrier for malicious actors to weaponize it.
Tech giants are pouring capital into AI infrastructure—NVIDIA and Microsoft alone increased CapEx from $33 billion to $212 billion between 2014 and 2024. This influx creates a vast, interconnected attack surface. Investors should prioritize cybersecurity firms that specialize in quantum-resistant encryption, AI-driven threat detection, and real-time infrastructure protection.
The Human Element: Skills Gaps and Strategic Shifts
The demand for AI expertise is soaring, but the workforce is struggling to keep pace. U.S. AI-related IT job postings have surged 448% since 2018, while non-AI IT roles have declined by 9%. This bifurcation signals two realities:
1. Cybersecurity skills are now mission-critical for safeguarding AI systems.
2. Ethical AI development and governance are emerging as compliance priorities, particularly in regulated industries.
The data will likely show a stark divergence, reinforcing the need for investors to back training platforms and cybersecurity firms bridging this skills gap.
Portfolio Reallocation: Where to Deploy Capital
Infuse’s insights suggest three actionable strategies:
- Core holdings in cybersecurity leaders: Target firms like CrowdStrike (CRWD) and Palo Alto Networks (PANW), which excel in AI-powered threat detection and endpoint security.
- Geopolitical plays: Invest in companies addressing data sovereignty and cross-border compliance, such as Palantir (PLTR) or Cloudflare (NET), which offer hybrid cloud solutions.
- Emerging sectors: Look to quantum computing security (e.g., Rigetti Computing (RGTI)) and AI governance platforms like DataRobot, which help enterprises audit and validate AI models.
The Bottom Line: AI’s Growth Requires a Security Foundation
The “productivity paradox” of AI—where speculative valuations outstrip tangible ROI—is real. Yet, cybersecurity is one area where returns are measurable: breaches cost companies millions, and defenses reduce risk. Investors should treat cybersecurity as the bedrock of their AI investments.
As Infuse’s letter implies, the next decade will belong to those who balance AI’s promise with ironclad security. Position portfolios accordingly.
JR Research
5 Ways CFOs Can Upskill Their Staff in AI to Stay Competitive
Chief financial officers are recognizing the need to upskill their workforce to ensure their teams can effectively harness artificial intelligence (AI).