Scams involve ‘AI-generated voice messages impersonating senior officials and prominent public figures to steal money and information,’ police say
NEWS RELEASE
ONTARIO PROVINCIAL POLICE
*************************
Members of the Ontario Provincial Police (OPP) and the Canadian Anti-Fraud Centre (CAFC) are continuing to raise awareness among north Simcoe residents of the various scams they may encounter on the telephone or online.
Cyber security officials in the Government of Canada are warning Canadians about a spike in malicious cyber activity, in which threat actors use text and AI-generated voice messages impersonating senior officials and prominent public figures to steal money and information.
Canadian authorities have become aware of a malicious cyber campaign targeting business executives and senior public officials. A threat actor is sending malicious links or urgent financial requests using messaging accounts and voice calls that claim to be from senior government officials. In some cases, they are using AI to mimic the officials’ voices to make the calls more convincing.
The Canadian Centre for Cyber Security, a part of the Communications Security Establishment Canada, and its partners have been tracking and monitoring how AI is improving the personalization and persuasiveness of social engineering attacks worldwide for months. The FBI also alerted the public to this threat in April 2025. Canadian officials have recently become aware of similar tactics targeting Canadians in a related or linked campaign.
xAI notified employees by email that it was planning to downsize its team of generalist AI tutors, according to messages viewed by the publication. The company said the “strategic pivot” meant prioritizing specialist AI tutors while scaling back its focus on general AI tutor roles.
In response to the story, xAI directed reporters to a post on X, in which the company said it plans to expand its specialist AI tutor team by “10X” and intends to open roles on its careers page.
The human data annotator team at xAI plays a key role in teaching Grok to understand the world by labeling, contextualizing, and categorizing raw data used to train the chatbot. The email sent by xAI said that laid-off workers would be paid through either the end of their contract or Nov. 30, but their access to company systems would be terminated the day of the layoff notice.
Prior to the layoff, xAI’s data annotation team was one of the company’s largest, with 1,500 full-time and contract staff members, including AI tutors. The reorganization of the data annotation team follows a leadership shake-up that reportedly saw nine employees exit the firm last week.
In a sign of its changing approach to training Grok, xAI on Thursday asked some of the AI tutors to prepare for tests, Business Insider reported, covering traditional domains such as STEM, coding, finance, and medicine, as well as quirkier specialties such as Grok’s “personality and model behavior” and “doomscrollers.”
Musk launched xAI in 2023 to compete with OpenAI and Google DeepMind. He introduced Grok as a safe and truthful alternative to what he characterized as competitors’ “woke” chatbots prone to censorship.
The environmental impact of the rise of AI is a very real concern, and it’s not one that’s going away in a hurry. Especially not when Google’s planned new datacenter in the UK looks set to emit as much carbon dioxide in a year as hundreds of flights every week would.
It comes via a report from The Guardian, which has seen the plans for the new facility and its carbon impact assessment.
According to the report, the datacenter, which has not yet been given planning consent to proceed, will emit over half a million tonnes of CO2 over the course of a year. To give it some real-world context, this is equivalent to 500 flights from the UK to Spain every week of the year.
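As a rough sanity check on that comparison (the annual total and flight count are from the report; the per-flight interpretation is my own assumption, not a figure from Google’s planning documents):

```python
# Rough sanity check of the datacentre-vs-flights comparison.
# Figures from the report: ~500,000 tonnes of CO2 per year,
# compared to 500 UK-to-Spain flights per week.
annual_emissions_tonnes = 500_000
flights_per_week = 500
weeks_per_year = 52

# Implied whole-aircraft CO2 per flight if the comparison holds
tonnes_per_flight = annual_emissions_tonnes / (flights_per_week * weeks_per_year)
print(round(tonnes_per_flight, 1))  # ~19.2 tonnes per flight
```

Roughly 19 tonnes is in the right ballpark for the total (whole-aircraft, not per-passenger) emissions of a short-haul narrow-body flight, so the comparison appears internally consistent.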
Google says, in the planning documents, this is a “minor adverse and not significant impact when compared to the UK carbon budgets.” Is it, though? And that’s before considering how much water will be needed to cool it.
Google isn’t the only company with plans to invest in AI infrastructure in the UK, either. During this week’s state visit by President Trump, the CEOs of NVIDIA and OpenAI will be attending to announce their respective involvements in what is being coined the “British Stargate” in the north of the country.
OpenAI CEO, Sam Altman, will be part of Trump’s state visit to the UK, announcing investments in the country’s AI infrastructure. (Image credit: Getty Images | Andrew Harnik)
Campaigners are naturally concerned, but AI is here to stay and every corner of the globe wants in on it. If you don’t grow, you get left behind. As a Brit, I’m happy enough to see that we’re not being left behind on the biggest new tech frontier of the generation, but I appreciate and share the concerns on its impact.
The British government does not believe datacentres will have a significant impact on the UK’s carbon budget because of its ambitious targets for electricity grid decarbonisation. Rather, it is worried that without massive investment in new datacentres, the UK will fall behind international rivals, including France, resulting in a “compute gap” that “risks undermining national security, economic growth, and the UK’s ambition to lead in AI.”
Department for Science, Innovation and Technology (via The Guardian)
For one, I’m almost chuckling at the idea of increasing datacenters for AI, which seems to directly conflict with the country’s ‘promise’ to reach net-zero by 2050. I’m not sure how pumping vast quantities of CO2 into the atmosphere is going to work hand-in-hand with that one, but maybe we’re yet to see how it will be balanced out. I’m prepared to be wowed!
There’s also the energy requirement. As outlined in The Guardian’s report, datacenters already account for 2.5% of the country’s electricity use, and the more of these mega-facilities that come online, the higher that percentage will grow. I can tell you from experience, electricity here does not come cheap.
The AI revolution continues, though, and Trump or no Trump, it’s clear the United States is putting itself at the forefront. For us Brits, we’ll have to buy in most of it, but at least we’ll be around the table. Even if there are questions to be answered about what it’s doing to the planet.
The integration of artificial intelligence into submarine warfare may reduce the chances of crew survival to as little as 5%, according to a new report by the South China Morning Post (SCMP), citing a study led by Meng Hao, a senior engineer at the Chinese Institute of Helicopter Research and Development, Azernews reports.
Researchers analyzed an advanced anti-submarine warfare (ASW) system enhanced by AI, which is designed to detect and track even the most stealthy submarines. The system relies on real-time intelligent decision-making, allowing it to respond rapidly and adaptively to underwater threats. According to the study, only one out of twenty submarines may be able to avoid detection and attack under such conditions, a major shift in naval combat dynamics.
“As global powers accelerate the militarization of AI, this study suggests the era of ‘invisible’ submarines — long considered the backbone of strategic deterrence — may be drawing to a close,” SCMP notes.
Historically, stealth has been a submarine’s most valuable asset, allowing it to operate undetected and deter adversaries through uncertainty. However, the rise of AI-enabled systems threatens to upend this balance by minimizing human response delays, analyzing massive data sets, and predicting submarine behavior with unprecedented precision.
The implications extend far beyond underwater warfare. In August, Nick Wakeman, editor-in-chief of Defense One, reported that the U.S. Army is also exploring AI for use in air operations control systems. AI could enhance resilience to electronic warfare, enable better integration of drones, and support the deployment of autonomous combat platforms in contested airspace.
The growing role of AI in modern militaries, from the seabed to the stratosphere, raises new questions not only about tactical advantage, but also about ethical decision-making, autonomous weapons control, and the future of human involvement in combat scenarios.
As nations continue investing in next-generation warfare technology, experts warn that AI may not just change how wars are fought: it could redefine what survivability means on the modern battlefield.