Scams involve ‘AI-generated voice messages impersonating senior officials and prominent public figures to steal money and information,’ police say
NEWS RELEASE
ONTARIO PROVINCIAL POLICE
*************************
Members of the Ontario Provincial Police (OPP) and the Canadian Anti-Fraud Centre (CAFC) are continuing to raise awareness among north Simcoe residents of the various scams they may encounter on the telephone or online.
Cyber security officials in the Government of Canada are warning Canadians about a spike in malicious cyber activity, where threat actors are using text and AI-generated voice messages impersonating senior officials and prominent public figures to steal money and information.
Canadian authorities have become aware of a malicious cyber campaign targeting business executives and senior public officials. A threat actor is sending malicious links or urgent financial requests using messaging accounts and voice calls that claim to be from senior government officials. In some cases, they are using AI to mimic the officials’ voices to make the calls more convincing.
The Canadian Centre for Cyber Security, a part of the Communications Security Establishment Canada, and its partners have for months been tracking how AI is improving the personalization and persuasiveness of social engineering attacks worldwide. The FBI also alerted the public to this threat in April 2025. Canadian officials have recently become aware of similar tactics targeting Canadians in a related or linked campaign.
A new study that compares biological brains with artificial intelligence systems analyzed the neural network patterns that emerged during social and non-social tasks in mice and programmed artificial intelligence agents.
UCLA researchers identified high-dimensional “shared” and “unique” neural subspaces when mice interacted socially, as well as when AI agents engaged in social behaviors.
Findings could help advance understanding of human social disorders and develop AI that can understand and engage in social interactions.
As AI systems are increasingly integrated into everyday life, from virtual assistants and customer service agents to counseling and AI companions, an understanding of social neural dynamics is essential for both scientific and technological progress. A new study from UCLA researchers shows biological brains and AI systems develop remarkably similar neural patterns during social interaction.
The study, recently published in the journal Nature, reveals that when mice interact socially, specific brain cell types synchronize in “shared neural spaces,” and artificial intelligence agents develop analogous patterns when engaging in social behaviors.
The new research represents a striking convergence of neuroscience and artificial intelligence, two of today’s most rapidly advancing fields. By directly comparing how biological brains and AI systems process social information, scientists can now better understand fundamental principles that govern social cognition across different types of intelligent systems. The findings could advance understanding of social disorders like autism while simultaneously informing the development of more sophisticated, socially aware AI systems.
This work was supported in part by the National Science Foundation, the Packard Foundation, the Vallee Foundation, the Mallinckrodt Foundation and the Brain and Behavior Research Foundation.
Examining AI agents’ social behavior
A multidisciplinary team from UCLA’s departments of neurobiology, biological chemistry, bioengineering, electrical and computer engineering, and computer science across the David Geffen School of Medicine and UCLA Samueli School of Engineering used advanced brain imaging techniques to record activity from molecularly defined neurons in the dorsomedial prefrontal cortex of mice during social interactions. The researchers developed a novel computational framework to identify high-dimensional “shared” and “unique” neural subspaces across interacting individuals. The team then trained artificial intelligence agents to interact socially and applied the same analytical framework to examine neural network patterns in AI systems that emerged during social versus non-social tasks.
The research revealed striking parallels between biological and artificial systems during social interaction. In both mice and AI systems, neural activity could be partitioned into two distinct components: a “shared neural subspace” containing synchronized patterns between interacting entities, and a “unique neural subspace” containing activity specific to each individual.
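To make the idea of “shared” and “unique” neural subspaces concrete, here is a minimal sketch of one way such a decomposition could be computed for two interacting agents. The study’s actual computational framework is not detailed in this release; the CCA-based approach below, the simulated data, and all variable names (act_a, act_b, n_shared, and so on) are illustrative assumptions rather than the authors’ method.

# Toy decomposition of two agents' neural activity into shared and unique
# subspaces. Illustrative only; not the framework used in the UCLA study.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Simulated activity: time points x neurons for two interacting agents.
# A low-dimensional latent signal drives part of both populations ("shared"),
# while the rest of each population is independent ("unique").
T, n_a, n_b, n_shared = 1000, 40, 35, 3
latent = rng.standard_normal((T, n_shared))
act_a = latent @ rng.standard_normal((n_shared, n_a)) + 0.5 * rng.standard_normal((T, n_a))
act_b = latent @ rng.standard_normal((n_shared, n_b)) + 0.5 * rng.standard_normal((T, n_b))

# Shared subspace: canonical dimensions along which the two populations co-vary.
cca = CCA(n_components=n_shared)
shared_a, shared_b = cca.fit_transform(act_a, act_b)

# Unique subspace: what remains in each population after projecting out the
# shared dimensions (least-squares reconstruction from the canonical scores).
recon_a = shared_a @ np.linalg.lstsq(shared_a, act_a, rcond=None)[0]
recon_b = shared_b @ np.linalg.lstsq(shared_b, act_b, rcond=None)[0]
unique_a, unique_b = act_a - recon_a, act_b - recon_b

# Cross-agent correlation is high in the shared dimensions and low elsewhere.
for k in range(n_shared):
    r = np.corrcoef(shared_a[:, k], shared_b[:, k])[0, 1]
    print(f"shared dim {k}: r = {r:.2f}")

In this toy setup, the canonical dimensions recovered by CCA stand in for the synchronized “shared” activity, while the residuals stand in for each agent’s “unique” subspace; dropping or shuffling the shared dimensions would be a crude analogue of the disruption experiment described below.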
Remarkably, GABAergic neurons — inhibitory brain cells that regulate neural activity — showed significantly larger shared neural spaces compared with glutamatergic neurons, which are the brain’s primary excitatory cells. This represents the first investigation of inter-brain neural dynamics in molecularly defined cell types, revealing previously unknown differences in how specific neuron types contribute to social synchronization.
When the same analytical framework was applied to AI agents, shared neural dynamics emerged as the artificial systems developed social interaction capabilities. Most importantly, when researchers selectively disrupted these shared neural components in artificial systems, social behaviors were substantially reduced, providing direct evidence that synchronized neural patterns causally drive social interactions.
The study also revealed that shared neural dynamics don’t simply reflect coordinated behaviors between individuals, but emerge from each individual’s representation of the other’s unique behavioral actions during social interaction.
“This discovery fundamentally changes how we think about social behavior across all intelligent systems,” said Weizhe Hong, professor of neurobiology, biological chemistry and bioengineering at UCLA and lead author of the new work. “We’ve shown for the first time that the neural mechanisms driving social interaction are remarkably similar between biological brains and artificial intelligence systems. This suggests we’ve identified a fundamental principle of how any intelligent system — whether biological or artificial — processes social information. The implications are significant for both understanding human social disorders and developing AI that can truly understand and engage in social interactions.”
Continuing research for treating social disorders and training AI
The research team plans to further investigate shared neural dynamics in different and potentially more complex social interactions. They also aim to explore how disruptions in shared neural space might contribute to social disorders and whether therapeutic interventions could restore healthy patterns of inter-brain synchronization. The artificial intelligence framework may serve as a platform for testing hypotheses about social neural mechanisms that are difficult to examine directly in biological systems. The team also aims to develop methods for training socially intelligent AI.
The study was led by UCLA’s Hong and Jonathan Kao, associate professor of electrical and computer engineering. Co-first authors Xingjian Zhang and Nguyen Phi, along with collaborators Qin Li, Ryan Gorzek, Niklas Zwingenberger, Shan Huang, John Zhou, Lyle Kingsbury, Tara Raam, Ye Emily Wu and Don Wei contributed to the research.
If someone offers to make an AI video recreation of your wedding, just say no. This is the tough lesson I learned when I started trying to recreate memories with Google’s Gemini Veo model. What started off as a fun exercise ended in disgust.
I grew up in the era before digital capture. We took photos and videos, but most were squirreled away in boxes that we only dragged out for special occasions. Things like the birth of my children and their earliest years were caught on film and 8mm videotape.
When I got married in 1991, we didn’t even have a videographer (mostly a cost issue), so the record of that date is entirely in analog photos.
In general, there was no social media to capture and share my life’s surprising moments; I can’t point someone to Instagram, Facebook, or TikTok and say, “Don’t believe me? Check out this link.”
I do have a decent memory, though, and I wondered if I could combine it with a little AI magic to bring these moments to life.
For my test, I chose a couple of memorable moments from my early career and my 20s in Manhattan. These are 100% true stories that happened to me, but I have no visual record of them.
For the first one, I described a young, skinny, bespectacled man with curly hair (yes, I once had a full head of curly hair) meeting a famous and Tony Award-winning comedian in Times Square on Broadway. The comedian was Jackie Mason (ask your grandparents), and I wanted his autograph. He stopped, but as I spoke to him and he inexplicably started quizzing me about which TV to buy, a pigeon pooped on my head. Mason didn’t notice; I kept my composure and answered.
For the prompt, I painted the scene in broad strokes, describing my business attire, the year – 1989 – and how Mason looked with his curly hair and “cherubic face.” I included the bit of dialogue I could remember, and the action of me touching my head and realizing what happened. Then I fed Veo 3 the prompt.
A few minutes later, I had a decent recreation of the scene, complete with the pigeon. The guy didn’t look that much like me, and the Jackie Mason character bore only a passing resemblance to the once-iconic comedian.
Still, I was emboldened, and searched my memory for another memorable moment from my 20s.
I settled on the time I tried to impress my first boss with my tech skills. His laser printer (yes, kids, they existed in the 1980s) was running low on toner, but I remembered that you could extend the life of a cartridge by removing it from the printer and shaking it. So, that’s what I did, but the cartridge panel was stuck open and I proceeded to shower myself and the office with black toner as my stunned boss looked on.
In my prompt, I described the scene, including the wood-panel walls of the circa 1986 office, and included a brief description of myself and my bald, middle-aged boss who was seated at his desk. The dialogue included me explaining what I could do, saying, “Sorry,” and my boss’s good-natured laugh.
The results this time were even better. Even though neither character looked like their real-world counterparts, the printer, desk, and office were all eerily close to my memory, and the moment when the toner went everywhere was well done.
If I could open my brain and show people my memory of that moment, it might look a little like this. Impressive.
A union too far
Imagining a lifetime of memories rebuilt with AI, I wracked my brain for another core recollection. Then it hit me: my wedding.
It has always bothered us, particularly my wife, that we didn’t have a wedding video. What if I could create one with AI? (I know, I know, the foreshadowing is too heavy.)
It would not be enough to simply describe a wedding in Veo 3 and get an AI wedding video featuring people who looked nothing like us. I also knew, however, that you could guide an AI with source material. I have a lot of 34-year-old wedding photos. I grabbed a scanned image of one that featured me and my wife shortly after the ceremony, walking hand-in-hand back down the aisle. I liked the image not only because we were clearly represented but also because it featured some of our wedding party and guests.
With the hope of creating a long-sought-after wedding montage (of just eight seconds in duration), I crafted this prompt.
“I need a wedding video montage based on this wedding photo. The video should look like it was shot on HD-quality VHS tape and feature 2 seconds of the ceremony, 2 seconds of everyone dancing, a second of the groom feeding the bride wedding cake, a second of the bride throwing the bouquet, a second of the newlyweds leaving in a limo as everyone waves goodbye.”
Ambitious, I know, but I thought that by giving the model specifics on scene duration, it might squeeze it all in.
Instantly, I hit a speed bump; my Veo 3 Trial didn’t allow me to include a source image. If I wanted to start with a photo, I’d have to step back to Veo 2, which also meant I’d lose audio. That wouldn’t be a big deal, though, because, as described in the prompt, there really isn’t that much dialogue.
It took another few minutes for Veo 2 to spit out a few videos. All of them start with the base image, but to put it plainly, they are very, very wrong.
In each video, the thread of consistency snaps almost instantly, and my wife and I transform into other people. At one point, I’m dancing while holding a cake, and in another, my wife doesn’t know how to let go of the bouquet she’s supposed to throw. We awkwardly feed each other cake and sort of dance together.
The video is horrifying because it looks kind of right but also very wrong. These are worse than false memories; they’re an active distortion of one of the most important moments in my life. I showed the videos to my wife, who was appalled and told me they would give her nightmares.
It was hard to disagree, but I did remind her that the models would improve and a future result would be better. She was unmoved, and looked at me like I had sold one of our children.
What I did is no different from people reanimating photos of dead relatives with MyHeritage. Whatever the image starts with, everything after that first millisecond is false, or worse, it’s memory corruption. If you spent any time with that person when they were alive, that’s the true memory. An AI creation is guesswork, and even if it’s good, it’s also fake. They never moved just like that at that specific moment.
In the case of my wedding memories, I realize they’re better left to the gray-matter movie projector in my head.
As for the Veo 3 creations of my other memories, there’s no base image to corrupt. The AI is not recreating my memories as much as it’s become a storytelling tool, another way to illustrate a funny anecdote. That person isn’t me, that man isn’t my old boss, and that’s not Jackie Mason, but you get the gist of the stories. And for that, AI serves its purpose.
This is the last episode of the most meaningful project we’ve ever been part of.
The Amys couldn’t imagine signing off without telling you why the podcast is ending, reminiscing with founding producer Amanda Kersey, and fitting in two final Ask the Amys questions. HBR’s Maureen Hoch is here too, to tell the origin story of the show—because it was her idea, and a good one, right?
Saying goodbye to all the women who’ve listened since 2018 is gut-wrenching. If the podcast made a difference in your life, please bring us to tears/make us smile with an email: womenatwork@hbr.org.
If and when you do that, you’ll receive an auto reply that includes a list of episodes organized by topic. Hopefully that will direct you to perspectives and advice that’ll help you make sense of your experiences, aim high, go after what you need, get through tough times, and take care of yourself. That’s the sort of insight and support we’ve spent the past eight years aiming to give this audience, and you all have in turn given so much back—to the Women at Work team and to one another.
A complete transcript of this episode will be available by July 9.