AI Insights
Washington, D.C., Will Pilot AI at Public Listening Session
Mayor Muriel Bowser’s administration has teamed up with the Office of the Chief Technology Officer (OCTO) to launch a new AI pilot focused on improving community engagement in Washington, D.C.
Bowser has taken action to advance value-driven AI adoption in the district, and the newly launched pilot is not the first AI-powered tool her government has implemented.
Now, the district is working with the Massachusetts Institute of Technology Governance Lab (MIT GOV/LAB) and the Stanford Digital Economy Lab (Stanford DEL) to pilot deliberation.io, a new AI tool developed by those partners. The tool is intended to support large-scale open dialogue among residents and help the district gather feedback.
Specifically, the district will use the tool to gather input around its forthcoming AI Public Listening Session on July 15.
During this session, residents are invited to learn about the government’s efforts on AI, hear from technology leaders, share public input, and identify solutions that support public interests.
The listening sessions, led by the OCTO, are a result of the mayor’s February 2024 order. Previous sessions focused on initial AI pilots and AI for education services, Chief Technology Officer Stephen Miller said, but the next will address issues including language access, accessibility, communication and usability.
“Now, we’re starting to look at how we’re using technology for things like accessibility, or how we’re using it to make the government more efficient or more effective,” Miller said.
OCTO, Stanford DEL and MIT GOV/LAB will work together during the pilot to ensure topics that are discussed are relevant to district residents. The partners will collect data, conduct an analysis and share a report when the pilot is complete.
The Bowser administration plans to make the deliberation.io tool more widely available pending the pilot’s results.
AI Insights
Real or AI: Band confirms use of artificial intelligence for its music on Spotify
The Velvet Sundown, a four-person band, or so it seems, has garnered a lot of attention on Spotify. It started posting music on the platform in early June and has since released two full albums and a few singles, with another album coming soon. Naturally, listeners began to accuse the band of being an AI-generated project, which, as it turns out, is true.
The band, or music project, called The Velvet Sundown has over a million monthly listeners on Spotify. That’s an impressive debut considering its first album, “Floating on Echoes,” hit the music streaming platform on June 4. Then, on June 19, its second album, “Dust and Silence,” was added to the library. Next week, July 14, will mark the release of the third album, “Paper Sun Rebellion.” Since the debut, listeners have accused the band of being an AI-generated project, and now the project’s owners have updated its Spotify bio, calling it a “synthetic music project guided by human creative direction, and composed, voiced, and visualized with the support of artificial intelligence.”
It goes on to state that this project challenges the boundaries of “authorship, identity, and the future of music itself in the age of AI.” The owners claim that the characters, stories, music, voices, and lyrics are “original creations generated with the assistance of artificial intelligence tools,” but it is unclear to what extent AI was involved in the development process.
The band art shows four individuals, suggesting they are the members of the project, but the images are likely AI-generated as well. Interestingly, Andrew Frelon (a pseudonym) initially claimed to be the owner of the AI band, but later admitted that was untrue and that he had pretended to run its Twitter account because he wanted to insert an “extra layer of weird” into the story of the AI band.
As it stands, The Velvet Sundown’s music is available on Spotify, with the new album releasing next week. Whether this unveiling causes a spike or a decline in monthly listeners remains to be seen.
AI Insights
How to Choose Between Deploying an AI Chatbot or Agent
In artificial intelligence, the trend du jour is AI agents, or algorithmic bots that can autonomously retrieve data and act on it.
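The practical difference comes down to control flow. Below is a minimal, hypothetical sketch contrasting a chatbot (a single request/response) with an agent (a loop that can retrieve data before acting); the `llm` and `search_docs` callables are stand-ins for whatever model API and tools a team actually deploys, not any specific product.

```python
from typing import Callable

def chatbot(llm: Callable[[str], str], user_msg: str) -> str:
    # One shot: the model answers directly from the prompt alone.
    return llm(user_msg)

def agent(llm: Callable[[str], str],
          search_docs: Callable[[str], str],
          user_msg: str,
          max_steps: int = 3) -> str:
    # Loop: the model may decide to retrieve data before giving a final answer.
    context = ""
    for _ in range(max_steps):
        decision = llm(
            f"Question: {user_msg}\nContext: {context}\n"
            "Reply 'SEARCH: <query>' to retrieve data, or give a final answer."
        )
        if decision.startswith("SEARCH:"):
            context += "\n" + search_docs(decision.removeprefix("SEARCH:").strip())
        else:
            return decision
    # Fall back to answering with whatever was gathered.
    return llm(f"Answer using what you have. Question: {user_msg}\nContext: {context}")
```

The chatbot is cheaper and more predictable; the agent trades predictability for the ability to fetch data and act on it autonomously, which is the core of the deployment decision.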
AI Insights
Do AI systems socially interact the same way as living beings?
Key takeaways
- A new study comparing biological brains with artificial intelligence systems analyzed the neural network patterns that emerged during social and non-social tasks in mice and in programmed artificial intelligence agents.
- UCLA researchers identified high-dimensional “shared” and “unique” neural subspaces both when mice interacted socially and when AI agents engaged in social behaviors.
- The findings could help advance understanding of human social disorders and guide the development of AI that can understand and engage in social interactions.
As AI systems are increasingly integrated into roles ranging from virtual assistants and customer service agents to counseling and AI companions, an understanding of social neural dynamics is essential for both scientific and technological progress. A new study from UCLA researchers shows biological brains and AI systems develop remarkably similar neural patterns during social interaction.
The study, recently published in the journal Nature, reveals that when mice interact socially, specific brain cell types synchronize in “shared neural spaces,” and artificial intelligence agents develop analogous patterns when engaging in social behaviors.
The new research represents a striking convergence of neuroscience and artificial intelligence, two of today’s most rapidly advancing fields. By directly comparing how biological brains and AI systems process social information, scientists can now better understand fundamental principles that govern social cognition across different types of intelligent systems. The findings could advance understanding of social disorders like autism while simultaneously informing the development of more sophisticated, socially aware AI systems.
This work was supported in part by the National Science Foundation, the Packard Foundation, the Vallee Foundation, the Mallinckrodt Foundation and the Brain and Behavior Research Foundation.
Examining AI agents’ social behavior
A multidisciplinary team from UCLA’s departments of neurobiology, biological chemistry, bioengineering, electrical and computer engineering, and computer science across the David Geffen School of Medicine and UCLA Samueli School of Engineering used advanced brain imaging techniques to record activity from molecularly defined neurons in the dorsomedial prefrontal cortex of mice during social interactions. The researchers developed a novel computational framework to identify high-dimensional “shared” and “unique” neural subspaces across interacting individuals. The team then trained artificial intelligence agents to interact socially and applied the same analytical framework to examine neural network patterns in AI systems that emerged during social versus non-social tasks.
The research revealed striking parallels between biological and artificial systems during social interaction. In both mice and AI systems, neural activity could be partitioned into two distinct components: a “shared neural subspace” containing synchronized patterns between interacting entities, and a “unique neural subspace” containing activity specific to each individual.
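To make the shared/unique decomposition concrete, here is a minimal sketch in the spirit of that framework. It assumes the activity of two interacting individuals can be compared with canonical correlation analysis (CCA), treating the maximally correlated directions as the “shared” subspace and the residual as each individual’s “unique” activity; the dimensions, data, and choice of CCA are illustrative assumptions, not the authors’ actual pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
T, n_a, n_b = 1000, 40, 40           # time points, neurons in individual A and B
X_a = rng.standard_normal((T, n_a))  # activity of individual A (time x neurons)
X_b = rng.standard_normal((T, n_b))  # activity of individual B

# Shared subspace: directions in each individual's activity that are maximally
# correlated with the partner's activity over time.
n_shared = 5
cca = CCA(n_components=n_shared)
Z_a, Z_b = cca.fit_transform(X_a, X_b)   # shared components for A and B

def remove_shared(X, Z):
    # Unique subspace: residual activity once the shared components are regressed out.
    beta, *_ = np.linalg.lstsq(Z, X, rcond=None)
    return X - Z @ beta

U_a = remove_shared(X_a, Z_a)
U_b = remove_shared(X_b, Z_b)

print("shared components:", Z_a.shape, "unique residual:", U_a.shape)
```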
Remarkably, GABAergic neurons — inhibitory brain cells that regulate neural activity — showed significantly larger shared neural spaces compared with glutamatergic neurons, which are the brain’s primary excitatory cells. This represents the first investigation of inter-brain neural dynamics in molecularly defined cell types, revealing previously unknown differences in how specific neuron types contribute to social synchronization.
When the same analytical framework was applied to AI agents, shared neural dynamics emerged as the artificial systems developed social interaction capabilities. Most importantly, when researchers selectively disrupted these shared neural components in artificial systems, social behaviors were substantially reduced, providing direct evidence that synchronized neural patterns causally drive social interactions.
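The disruption test can be pictured as projecting out the shared subspace and checking how much social signal survives. The sketch below is a hypothetical illustration only: the shared axes, the injected latent signal, and the variance-based score are invented for demonstration and are not the authors’ actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_units, n_shared = 1000, 40, 5

# Orthonormal axes standing in for an agent's shared neural subspace.
W_shared = np.linalg.qr(rng.standard_normal((n_units, n_shared)))[0]
latent = rng.standard_normal((T, n_shared))                         # shared "social" signal
H = latent @ W_shared.T + 0.5 * rng.standard_normal((T, n_units))   # hidden activity

def social_signal(A):
    # Variance of activity along the shared axes, used here as a toy readout.
    return float(np.var(A @ W_shared))

# Ablate: remove the projection of activity onto the shared subspace.
H_ablated = H - (H @ W_shared) @ W_shared.T

print("intact: ", social_signal(H))          # large: shared signal present
print("ablated:", social_signal(H_ablated))  # near zero: shared signal removed
```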
The study also revealed that shared neural dynamics don’t simply reflect coordinated behaviors between individuals, but emerge from representations of each other’s unique behavioral actions during social interaction.
“This discovery fundamentally changes how we think about social behavior across all intelligent systems,” said Weizhe Hong, professor of neurobiology, biological chemistry and bioengineering at UCLA and lead author of the new work. “We’ve shown for the first time that the neural mechanisms driving social interaction are remarkably similar between biological brains and artificial intelligence systems. This suggests we’ve identified a fundamental principle of how any intelligent system — whether biological or artificial — processes social information. The implications are significant for both understanding human social disorders and developing AI that can truly understand and engage in social interactions.”
Continuing research for treating social disorders and training AI
The research team plans to further investigate shared neural dynamics in different and potentially more complex social interactions. They also aim to explore how disruptions in shared neural space might contribute to social disorders and whether therapeutic interventions could restore healthy patterns of inter-brain synchronization. The artificial intelligence framework may serve as a platform for testing hypotheses about social neural mechanisms that are difficult to examine directly in biological systems. The team also aims to develop methods to train socially intelligent AI.
The study was led by UCLA’s Hong and Jonathan Kao, associate professor of electrical and computer engineering. Co-first authors Xingjian Zhang and Nguyen Phi, along with collaborators Qin Li, Ryan Gorzek, Niklas Zwingenberger, Shan Huang, John Zhou, Lyle Kingsbury, Tara Raam, Ye Emily Wu and Don Wei contributed to the research.