AI Insights

Police warn of spike in scams involving artificial intelligence

Scams involve ‘AI-generated voice messages impersonating senior officials and prominent public figures to steal money and information,’ police say

NEWS RELEASE

ONTARIO PROVINCIAL POLICE

*************************

Members of the Ontario Provincial Police (OPP) and the Canadian Anti-Fraud Centre (CAFC) are continuing to raise awareness among North Simcoe residents of the various scams they may encounter on the telephone or online.

Cyber security officials in the Government of Canada are warning Canadians about a spike in malicious cyber activity, where threat actors are using text and AI-generated voice messages impersonating senior officials and prominent public figures to steal money and information.

Canadian authorities have become aware of a malicious cyber campaign targeting business executives and senior public officials. A threat actor is sending malicious links or urgent financial requests using messaging accounts and voice calls that claim to be from senior government officials. In some cases, they are using AI to mimic the officials’ voices to make the calls more convincing.

The Canadian Centre for Cyber Security, a part of the Communications Security Establishment Canada, and its partners have been tracking and monitoring how AI is improving the personalization and persuasiveness of social engineering attacks worldwide for months. The FBI also alerted the public to this threat in April 2025. Canadian officials have recently become aware of similar tactics targeting Canadians in a related or linked campaign.

To read the full advisory, visit Joint Advisory: Cyber officials warn Canadians of malicious campaign to impersonate high-profile public figures.

*************************





xAI lays off 500 AI tutors working on Grok

Elon Musk’s artificial intelligence startup xAI has laid off 500 workers from its data annotation team, which helps train its Grok chatbot.

The layoffs were first reported by Business Insider.

The AI company notified employees over email that it was planning to downsize its team of generalist AI tutors, according to messages viewed by the publication. The company said the “strategic pivot” meant prioritizing specialist AI tutors, while scaling back its focus on general AI tutor roles.

In response to the story, xAI directed reporters to a post on X, in which the company said it plans to expand its specialist AI tutor team by “10X” and intends to open roles on its careers page.

The human data annotator team at xAI plays a key role in teaching Grok to understand the world by labeling, contextualizing, and categorizing raw data used to train the chatbot. The email sent by xAI said that laid-off workers would be paid through either the end of their contract or Nov. 30, but their access to company systems would be terminated the day of the layoff notice.

Prior to the layoffs, xAI’s data annotation team was one of the company’s largest, with 1,500 full-time and contract staff members, including AI tutors. The reorganization of the data annotation team follows a leadership shake-up on the team that reportedly saw nine employees exit the firm last week.

In a sign of its changing approach to training Grok, xAI on Thursday asked some of the AI tutors to prepare for tests covering traditional domains such as STEM, coding, finance, and medicine, as well as quirkier specialties such as Grok’s “personality and model behavior” and “doomscrollers,” Business Insider reported.

Musk launched xAI in 2023 to compete with OpenAI and Google DeepMind in the race to build leading AI systems. He introduced Grok as a safe and truthful alternative to what he characterized as competitors’ “woke,” censorship-prone chatbots.





Google’s newest AI datacenter & its monstrous CO2 emissions

The impact of the rise of AI on the environment is a very real concern, and it’s not one that’s going away in a hurry. Especially not when Google’s planned new datacenter in the UK looks set to emit the same quantity of carbon dioxide in a year as hundreds of flights every week would.

It comes via a report from The Guardian, which has seen the plans for the new facility and its carbon impact assessment.





China doubts artificial intelligence use in submarines

by Alimat Aliyeva

The integration of artificial intelligence into submarine warfare may reduce a crew’s chances of survival to as little as 5%, according to a new report by the South China Morning Post (SCMP), citing a study led by Meng Hao, a senior engineer at the Chinese Institute of Helicopter Research and Development, Azernews reports.

Researchers analyzed an advanced anti-submarine warfare (ASW) system enhanced by AI, which is designed to detect and track even the most stealthy submarines. The system relies on real-time intelligent decision-making, allowing it to respond rapidly and adaptively to underwater threats. According to the study, only one out of twenty submarines may be able to avoid detection and attack under such conditions — a major shift in naval combat dynamics.

“As global powers accelerate the militarization of AI, this study suggests the era of ‘invisible’ submarines — long considered the backbone of strategic deterrence — may be drawing to a close,” SCMP notes.

Historically, stealth has been a submarine’s most valuable asset, allowing it to operate undetected and deter adversaries through uncertainty. However, the rise of AI-enabled systems threatens to upend this balance by minimizing human response delays, analyzing massive data sets, and predicting submarine behavior with unprecedented precision.

The implications extend far beyond underwater warfare. In August, Nick Wakeman, editor-in-chief of Defense One, reported that the U.S. Army is also exploring AI for use in air operations control systems. AI could enhance resilience to electronic warfare, enable better integration of drones, and support the deployment of autonomous combat platforms in contested airspace.

The growing role of AI in modern militaries — from the seabed to the stratosphere — raises new questions not only about tactical advantage, but also about ethical decision-making, autonomous weapons control, and the future of human involvement in combat scenarios.

As nations continue investing in next-generation warfare technology, experts warn that AI may not just change how wars are fought — it could redefine what survivability means on the modern battlefield.


