Welcome MeriTalk’s 2025 AI Honors Award Winners!

Artificial intelligence – whether classical, generative, agentic, or wherever the newest models take us next – has become the dominant force behind improving government technology, network security, mission success, and citizen service delivery.

And driving that wave forward is the latest generation of AI practitioners, developers, and visionary thinkers who are leading the way in tapping into the technology’s potential to benefit us all.

That’s why MeriTalk is delighted to honor the 2025 class of AI Honors Award Winners – the 30 women and men working across government and industry right now to bring AI to bear in shaping the ongoing revolution in government IT service.

Each of the 2025 AI Honors Award winners was nominated by their peers for outstanding work in putting AI tech to work for government missions. A few of them are familiar to many of us, but most are the fresh talent emerging into the technology limelight.

“This year’s honorees are turning the buzz of AI into real-world progress across government,” said Caroline Boyd, principal, government programs at MeriTalk. “They’re redefining what’s possible and we’re proud to spotlight their work in driving innovation and impact.”

Please join us at Tech Tonic on July 17 from 5 p.m. to 9 p.m. at Morton’s the Steakhouse in Washington, D.C. to celebrate the winners who will receive their awards in person. Drop us an RSVP today and join in the celebration at the Happiest Hour in Govt IT.

Here are the 30 AI Honors Award winners for 2025:

Government:

Togai Andrews, Chief Information Security Officer, Bureau of Engraving and Printing;

Taka Ariga, former Chief Artificial Intelligence Officer and Chief Data Officer, Office of Personnel Management;

Dean Ball, Senior Policy Advisor, Artificial Intelligence and Emerging Technology, White House Office of Science and Technology Policy;

Gabe Chiulli, Chief Technology Officer, U.S. Army Enterprise Cloud Management Agency;

Susan Davenport, Chief Data Officer and Chief Artificial Intelligence Officer, U.S. Air Force;

Leonel Garciga, Chief Information Officer, U.S. Army;

J. Matt Gilkeson, Chief Technology Officer, Chief Data Officer, and Artificial Intelligence Officer for Information Technology, Transportation Security Administration;

Mike Horton, Acting Chief Artificial Intelligence Officer, Department of Transportation;

Lt. Col. Chuck Kubik, GigEagle Strategy and Product Lead, U.S. Air Force;

Douglas Matty, Chief Digital and Artificial Intelligence Officer, Department of Defense;

Matheus Passos, Chief Architect and Responsible Artificial Intelligence Official, Department of Commerce;

Lakshmi Raman, Chief Artificial Intelligence Officer, Central Intelligence Agency;

Dr. Reza Rashidi, Acting Chief Data and Analytics Officer, Internal Revenue Service;

Nael Samha, Executive Director, Targeting and Analysis Systems Program Directorate, U.S. Customs and Border Protection;

Thomas Shedd, Director, Technology Transformation Services, and Deputy Commissioner, Federal Acquisition Service, General Services Administration, and Department of Labor;

Zach Whitman, Chief Artificial Intelligence Officer and Chief Data Scientist, General Services Administration; and

Morgan Zimmerman, Artificial Intelligence Policy Analyst, Office of Management and Budget, Office of the Federal Chief Information Officer.

Industry:

Jonathan Alboum, Federal Chief Technology Officer, ServiceNow;

Nicolas Chaillan, Chief Executive Officer and Founder, Ask Sage;

Brandy Durham, Vice President, Data and Artificial Intelligence Practice, ManTech;

John Dvorak, Chief Technology Officer, Public Sector, Red Hat;

Burnie Legette, Solution Architect, Artificial Intelligence and Data Operationalization, Intel Corporation;

Amanda Levay, Chief Executive Officer and Founder, Redactable;

Krishna Narayanaswamy, Chief Technology Officer and Co-Founder, Netskope;

Vimesh Patel, Chief Technology Advisor, Federal, World Wide Technology;

Bill Rowan, Vice President, Public Sector, Splunk, a Cisco Company;

Ryan Simpson, Chief Technologist, Public Sector, NVIDIA;

Josh Slattery, Vice President, Technology Sales, Vertosoft;

Chris “CT” Thomas, Technical Director, Global Defense, Artificial Intelligence, and Data Systems, Dell Technologies; and

Chris Townsend, Global Vice President, Public Sector, Elastic.



Real or AI: Band confirms use of artificial intelligence for its music on Spotify

The Velvet Sundown, a four-person band, or so it seems, has garnered a lot of attention on Spotify. It started posting music on the platform in early June and has since released two full albums, with a few more singles and another album coming soon. Naturally, listeners started to accuse the band of being an AI-generated project, which, as it now turns out, is true.

The band, or music project, called The Velvet Sundown has over a million monthly listeners on Spotify. That’s an impressive debut considering its first album, “Floating on Echoes,” hit the music streaming platform on June 4. Then, on June 19, its second album, “Dust and Silence,” was added to the library. Next week, on July 14, the third album, “Paper Sun Rebellion,” will be released. Since the debut, listeners have accused the band of being an AI-generated project, and the owners have now updated the Spotify bio to call it a “synthetic music project guided by human creative direction, and composed, voiced, and visualized with the support of artificial intelligence.”

It goes on to state that this project challenges the boundaries of “authorship, identity, and the future of music itself in the age of AI.” The owners claim that the characters, stories, music, voices, and lyrics are “original creations generated with the assistance of artificial intelligence tools,” but it is unclear to what extent AI was involved in the development process.

The band art shows four individuals, suggesting they are the owners of the project, but the images are likely AI-generated as well. Interestingly, Andrew Frelon (a pseudonym) initially claimed to be the owner of the AI band, but later confirmed that was untrue and that he had pretended to run the band’s Twitter account because he wanted to insert an “extra layer of weird into this story.”

As it stands now, The Velvet Sundown’s music is available on Spotify, with the new album releasing next week. Whether this unveiling causes a spike or a decline in monthly listeners remains to be seen.



How to Choose Between Deploying an AI Chatbot or Agent

In artificial intelligence, the trend du jour is AI agents, or algorithmic bots that can autonomously retrieve data and act on it.

But how are AI agents different from AI chatbots, and why should businesses care?

Understanding how they differ can help businesses choose the right solution for the right job and avoid underusing or overcomplicating their AI investments.

An AI chatbot or assistant is a program that uses natural language processing to interact with users in a conversational way. Think of ChatGPT. It can answer questions, guide users and simulate dialogue.

Chatbots only react to prompts. They don’t act on their own or carry out multistep goals. They are helpful and conversational but ultimately limited to what they’re asked.

An AI agent goes a step further. Like a chatbot, it can understand natural language and interact conversationally. But it also has autonomy and can complete tasks. It is proactive.

Instead of just replying, an AI agent can make decisions, take actions across systems, plan and carry out multistep processes, and learn from past interactions or external data.

For example, imagine a travel platform. An AI chatbot might help a user plan their travel itinerary. An AI agent, on the other hand, could do more, such as:

  • Understand the request, such as booking a flight to Los Angeles.
  • Search multiple airline sites.
  • Compare flight options based on user preferences.
  • Book the flight.
  • Send a confirmation email.

All of this could happen without the user needing to click through a series of links or speak to a human agent. AI agents can be embedded in customer service, HR systems, sales platforms and the like.
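To make the distinction concrete, here is a minimal Python sketch of the agent flow described above. Every function in it (search_airlines, choose_flight, book_flight, send_confirmation) is a hypothetical placeholder rather than a real airline, booking, or email API; the point is that the agent plans and carries out the whole multistep task, where a chatbot would stop after answering the initial question.

  # Illustrative sketch only. All functions below are hypothetical placeholders,
  # not real airline, booking, or email APIs.
  from dataclasses import dataclass

  @dataclass
  class Flight:
      airline: str
      price: float
      nonstop: bool

  def search_airlines(destination: str) -> list[Flight]:
      # A real agent would query airline or aggregator APIs here.
      return [Flight("AirOne", 420.0, True), Flight("BudgetJet", 310.0, False)]

  def choose_flight(options: list[Flight], prefer_nonstop: bool) -> Flight:
      # Compare options against a simple user preference.
      nonstop = [f for f in options if f.nonstop]
      pool = nonstop if (prefer_nonstop and nonstop) else options
      return min(pool, key=lambda f: f.price)

  def book_flight(flight: Flight) -> str:
      # Placeholder booking step; returns a confirmation code.
      return f"CONF-{flight.airline[:3].upper()}-001"

  def send_confirmation(email: str, code: str) -> None:
      print(f"Sent confirmation {code} to {email}")

  def run_travel_agent(destination: str, prefer_nonstop: bool, email: str) -> None:
      # The "agent" part: plan and execute the whole multistep task end to end,
      # rather than just replying to a single prompt.
      options = search_airlines(destination)
      choice = choose_flight(options, prefer_nonstop)
      code = book_flight(choice)
      send_confirmation(email, code)

  run_travel_agent("Los Angeles", prefer_nonstop=True, email="user@example.com")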


Why Businesses Should Care

Knowing the difference can help a business plan more strategically. AI chatbots require less inference compute than AI agents and are therefore more cost-effective to run. Moreover, businesses can use AI chatbots and AI agents for very different outcomes.

AI chatbot use cases include the following:

  • Customer service
  • Data retrieval
  • Planning and analysis
  • Basic IT support
  • Conversation
  • Writing documents
  • Code generation

AI agent use cases include the following:

  • Automated checkout
  • Automated content curation
  • Travel and reservation execution tasks
  • Shopping and payment processing

AI chatbots and AI agents both use natural language and large language models, but their functions are different. Chatbots are answer machines while agents are action bots.

For businesses looking to improve how they serve customers, streamline operations or support employees, AI agents offer a new level of power and flexibility. Knowing when and how to use each tool can help companies make smarter AI investments.

To choose between deploying an AI chatbot or an AI agent, consider the following (a toy decision sketch follows the list):

  • Budgets: AI chatbots are cheaper to run since they use less inference.
  • Complexity of use case: For straightforward tasks, use a chatbot. For tasks that need multistep coordination, use an AI agent.
  • Skilled talent: Assess the IT team’s ability to handle chatbots versus agents. Chatbots are easier to deploy and update. AI agents require more advanced machine learning, natural language processing and other skills.
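
As a rough illustration of how those three considerations might combine, the toy Python heuristic below maps them to a recommendation. The inputs, thresholds, and rules are illustrative assumptions, not a formal framework.

  # Toy heuristic only; the inputs and decision rules are illustrative assumptions.
  def recommend_deployment(needs_multistep: bool, budget_is_tight: bool, team_has_ml_depth: bool) -> str:
      if not needs_multistep:
          return "chatbot"                         # simple Q&A, drafting, retrieval
      if budget_is_tight or not team_has_ml_depth:
          return "chatbot (revisit agents later)"  # constraints outweigh the benefit for now
      return "agent"                               # multistep coordination justifies the cost

  print(recommend_deployment(needs_multistep=True, budget_is_tight=False, team_has_ml_depth=True))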

Do AI systems socially interact the same way as living beings?

Key takeaways

  • A new study comparing biological brains with artificial intelligence systems analyzed the neural network patterns that emerged during social and non-social tasks in mice and in programmed artificial intelligence agents.
  • UCLA researchers identified high-dimensional “shared” and “unique” neural subspaces both when mice interacted socially and when AI agents engaged in social behaviors.
  • Findings could help advance understanding of human social disorders and develop AI that can understand and engage in social interactions.

As AI systems are increasingly integrated into roles ranging from virtual assistants and customer service agents to counseling and AI companions, an understanding of social neural dynamics is essential for both scientific and technological progress. A new study from UCLA researchers shows that biological brains and AI systems develop remarkably similar neural patterns during social interaction.

The study, recently published in the journal Nature, reveals that when mice interact socially, specific brain cell types synchronize in “shared neural spaces,” and that artificial intelligence agents develop analogous patterns when engaging in social behaviors.

The new research represents a striking convergence of neuroscience and artificial intelligence, two of today’s most rapidly advancing fields. By directly comparing how biological brains and AI systems process social information, scientists can now better understand fundamental principles that govern social cognition across different types of intelligent systems. The findings could advance understanding of social disorders like autism while simultaneously informing the development of more sophisticated, socially aware AI systems.

This work was supported in part by the National Science Foundation, the Packard Foundation, the Vallee Foundation, the Mallinckrodt Foundation and the Brain and Behavior Research Foundation.

Examining AI agents’ social behavior

A multidisciplinary team from UCLA’s departments of neurobiology, biological chemistry, bioengineering, electrical and computer engineering, and computer science across the David Geffen School of Medicine and UCLA Samueli School of Engineering used advanced brain imaging techniques to record activity from molecularly defined neurons in the dorsomedial prefrontal cortex of mice during social interactions. The researchers developed a novel computational framework to identify high-dimensional “shared” and “unique” neural subspaces across interacting individuals. The team then trained artificial intelligence agents to interact socially and applied the same analytical framework to examine neural network patterns in AI systems that emerged during social versus non-social tasks.

The research revealed striking parallels between biological and artificial systems during social interaction. In both mice and AI systems, neural activity could be partitioned into two distinct components: a “shared neural subspace” containing synchronized patterns between interacting entities, and a “unique neural subspace” containing activity specific to each individual.
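
The study’s actual computational framework is not spelled out in this article, but the idea of splitting activity into shared and unique subspaces can be illustrated with a small Python sketch. The example below simulates two agents’ activity, uses canonical correlation analysis (scikit-learn’s CCA) to find dimensions that are correlated across agents (the “shared” subspace), and treats the residual as the “unique” part; the simulated data and the choice of CCA are illustrative assumptions, not the paper’s method.

  # Illustrative only: simulated data and CCA stand in for the study's framework.
  import numpy as np
  from sklearn.cross_decomposition import CCA

  rng = np.random.default_rng(0)
  T, N = 500, 30                                    # timepoints, neurons per agent
  latent = rng.standard_normal((T, 3))              # latent activity both agents share
  agent_a = latent @ rng.standard_normal((3, N)) + 0.5 * rng.standard_normal((T, N))
  agent_b = latent @ rng.standard_normal((3, N)) + 0.5 * rng.standard_normal((T, N))

  cca = CCA(n_components=3)
  shared_a, shared_b = cca.fit_transform(agent_a, agent_b)   # projections into the shared subspace

  # Whatever the shared components fail to explain is treated as each agent's "unique" activity.
  centered_a = agent_a - agent_a.mean(axis=0)
  beta, *_ = np.linalg.lstsq(shared_a, centered_a, rcond=None)
  unique_a = centered_a - shared_a @ beta

  corrs = [np.corrcoef(shared_a[:, i], shared_b[:, i])[0, 1] for i in range(3)]
  print("cross-agent correlation of shared components:", np.round(corrs, 2))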

Remarkably, GABAergic neurons — inhibitory brain cells that regulate neural activity — showed significantly larger shared neural spaces compared with glutamatergic neurons, which are the brain’s primary excitatory cells. This represents the first investigation of inter-brain neural dynamics in molecularly defined cell types, revealing previously unknown differences in how specific neuron types contribute to social synchronization.

When the same analytical framework was applied to AI agents, shared neural dynamics emerged as the artificial systems developed social interaction capabilities. Most importantly, when researchers selectively disrupted these shared neural components in artificial systems, social behaviors were substantially reduced, providing direct evidence that synchronized neural patterns causally drive social interactions.

The study also revealed that shared neural dynamics don’t simply reflect coordinated behaviors between individuals, but emerge from representations of each other’s unique behavioral actions during social interaction.

“This discovery fundamentally changes how we think about social behavior across all intelligent systems,” said Weizhe Hong, professor of neurobiology, biological chemistry and bioengineering at UCLA and lead author of the new work. “We’ve shown for the first time that the neural mechanisms driving social interaction are remarkably similar between biological brains and artificial intelligence systems. This suggests we’ve identified a fundamental principle of how any intelligent system — whether biological or artificial — processes social information. The implications are significant for both understanding human social disorders and developing AI that can truly understand and engage in social interactions.”

Continuing research for treating social disorders and training AI

The research team plans to further investigate shared neural dynamics in different and potentially more complex social interactions. They also aim to explore how disruptions in shared neural space might contribute to social disorders and whether therapeutic interventions could restore healthy patterns of inter-brain synchronization. The artificial intelligence framework may serve as a platform for testing hypotheses about social neural mechanisms that are difficult to examine directly in biological systems. They also aim to develop methods to train socially intelligent AI.

The study was led by UCLA’s Hong and Jonathan Kao, associate professor of electrical and computer engineering. Co-first authors Xingjian Zhang and Nguyen Phi, along with collaborators Qin Li, Ryan Gorzek, Niklas Zwingenberger, Shan Huang, John Zhou, Lyle Kingsbury, Tara Raam, Ye Emily Wu and Don Wei contributed to the research.


