AI Insights
When Cybercriminals Weaponize Artificial Intelligence at Scale

Anthropic’s August threat intelligence report reads like a cybersecurity novel, except it’s terrifyingly not fiction. The report describes how cybercriminals used Claude AI to orchestrate attacks on 17 organizations, with ransom demands exceeding $500,000. This may be the most sophisticated AI-driven attack campaign to date.
But beyond the alarming headlines lies a more fundamental shift: the emergence of “agentic cybercrime,” in which AI doesn’t just assist attackers; it becomes their co-pilot, strategic advisor, and operational commander all at once.
The End of Traditional Cybercrime Economics
The Anthropic report confirms a stark reality that IT leaders have long feared: the economics of cybercrime have fundamentally changed. What previously required a team of specialized attackers working for weeks can now be accomplished by a single individual in a matter of hours with AI assistance.
Take the “vibe hacking” operation detailed in the report. A single cybercriminal used Claude Code to automate reconnaissance across thousands of systems, create custom malware with anti-detection capabilities, perform real-time network penetration, and analyze stolen financial data to calculate psychologically optimized ransom amounts.
More than just following instructions, the AI made tactical decisions about which data to exfiltrate and crafted victim-specific extortion strategies that maximized psychological pressure.
The Democratization of Sophisticated Attacks
One of the most unnerving revelations in Anthropic’s report involves North Korean IT workers who have infiltrated Fortune 500 companies using AI to simulate technical competence they don’t have. While these attackers are unable to write basic code or communicate professionally in English, they’re successfully maintaining full-time engineering positions at major corporations thanks to AI handling everything from technical interviews to daily work deliverables.
The report also discloses that 61 percent of the workers’ AI usage focused on frontend development, 26 percent on programming tasks, and 10 percent on interview preparation. They are essentially human proxies for AI systems, channeling hundreds of millions of dollars to North Korea’s weapons programs while their employers remain unaware.
Similarly, the report reveals how criminals with little technical skill are developing and selling sophisticated ransomware-as-a-service packages for $400 to $1,200 on dark web forums. Features that previously required years of specialized knowledge, such as ChaCha20 encryption, anti-EDR techniques, and Windows internals exploitation, are now generated on demand with the aid of AI.
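To see how completely the cryptography itself has been commoditized, consider a minimal sketch (ours, not the report’s): authenticated ChaCha20 encryption is a handful of calls to Python’s widely used cryptography package. The cipher was never the hard part; the evasion and operations layers were, and those are precisely what the report says AI now supplies on demand.

```python
# Illustrative only: authenticated ChaCha20-Poly1305 encryption using the
# standard "cryptography" package (pip install cryptography). This is the
# library's textbook usage, not code from the Anthropic report.
import os

from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # 256-bit key
cipher = ChaCha20Poly1305(key)
nonce = os.urandom(12)                 # 96-bit nonce, unique per message

ciphertext = cipher.encrypt(nonce, b"any file contents", None)
assert cipher.decrypt(nonce, ciphertext, None) == b"any file contents"
```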
Defense Speed Versus Attack Velocity
Traditional cybersecurity operates on human timetables, with threat detection, analysis, and response cycles measured in hours or days. AI-powered attacks, on the other hand, operate at machine speed, with reconnaissance, exploitation, and data exfiltration occurring in minutes.
The cybercriminal highlighted in Anthropic’s report automated network scanning across thousands of endpoints, identified vulnerabilities with “high success rates,” and moved through compromised networks faster than human defenders could respond. When initial attack vectors failed, the AI immediately generated alternatives, creating a dynamic adversary that adapted in real time.
This speed delta creates an impossible situation for traditional security operations centers (SOCs). Human analysts cannot keep up with the velocity and persistence of AI-augmented attackers operating 24/7 across multiple targets simultaneously.
Asymmetry of Intelligence
What makes these AI-powered attacks particularly dangerous isn’t only their speed – it’s their intelligence. The criminals highlighted in the report utilized AI to analyze stolen data and develop “profit plans” by incorporating multiple monetization strategies. Claude evaluated financial records to gauge optimal ransom amounts, analyzed organizational structures to locate key decision-makers, and crafted sector-specific threats based on regulatory vulnerabilities.
This level of strategic thinking, combined with operational execution, has created a new category of threats. These aren’t script-following amateurs using predefined playbooks; they’re adaptive adversaries that learn and evolve throughout each campaign.
The Acceleration of the Arms Race
The current challenge is summed up in one stark observation: “All of these operations were previously possible but would have required dozens of sophisticated people weeks to carry out the attack. Now all you need is to spend $1 and generate 1 million tokens.”
The asymmetry is significant. Human defenders must deal with procurement cycles, compliance requirements, and organizational approval before deploying new security technologies. Cybercriminals simply create new accounts when existing ones are blocked – a process that takes about “13 seconds.”
But this predicament also presents an opportunity. The same AI capabilities being weaponized can be harnessed for defense, and in many cases defensive AI has natural advantages.
Attackers can move fast, but defenders have access to something criminals don’t: historical data, organizational context, and the ability to establish baseline behaviors across entire IT environments. AI defense systems can monitor thousands of endpoints simultaneously, correlate subtle anomalies across network traffic, and respond faster than any human attacker can act.
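To make the baseline advantage concrete, here is a minimal sketch of baseline-driven anomaly detection. The endpoint names, the outbound-connection metric, and the three-sigma threshold are illustrative assumptions, not details from the report or any particular product:

```python
# Sketch: flag endpoints whose activity deviates sharply from their own history.
from statistics import mean, stdev

def build_baseline(history: dict[str, list[float]]) -> dict[str, tuple[float, float]]:
    """Compute per-endpoint mean and standard deviation from historical activity."""
    return {ep: (mean(vals), stdev(vals)) for ep, vals in history.items() if len(vals) >= 2}

def flag_anomalies(current: dict[str, float],
                   baseline: dict[str, tuple[float, float]],
                   z_threshold: float = 3.0) -> list[str]:
    """Return endpoints whose current reading is a statistical outlier."""
    flagged = []
    for ep, value in current.items():
        if ep not in baseline:
            continue  # no history yet; a real system would handle this case explicitly
        mu, sigma = baseline[ep]
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            flagged.append(ep)
    return flagged

# A week of outbound-connection counts per endpoint, then today's readings.
history = {"host-01": [12, 15, 11, 14, 13], "host-02": [40, 38, 42, 41, 39]}
today = {"host-01": 13, "host-02": 400}
print(flag_anomalies(today, build_baseline(history)))  # -> ['host-02']
```

This is the defender’s structural edge in miniature: the attacker sees each network once, while the defender has months of its normal behavior to compare against.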
Modern AI security platforms, such as AI SOC agents that function as always-on SOC analysts, have proven this principle in practice. By automating alert triage, investigation, and response, these systems process security events at machine speed while maintaining the context and judgment that pure automation lacks.
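As a hypothetical sketch of the triage layer such platforms automate, the scoring below combines alert severity, asset criticality, and detector confidence into a queue priority. The fields and thresholds are invented for illustration and don’t describe any specific product:

```python
# Sketch: rank incoming alerts so human analysts only see what matters.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str             # detector that raised the alert, e.g. "edr" or "ids"
    severity: int           # 1 (informational) .. 5 (critical)
    asset_criticality: int  # 1 .. 5, importance of the affected system
    confidence: float       # detector confidence, 0.0 .. 1.0

def triage_score(alert: Alert) -> float:
    """Collapse the signals into a single priority score for the analyst queue."""
    return alert.severity * alert.asset_criticality * alert.confidence

def prioritize(alerts: list[Alert], escalate_above: float = 12.0) -> list[Alert]:
    """Rank alerts and keep only those worth a human analyst's attention."""
    ranked = sorted(alerts, key=triage_score, reverse=True)
    return [a for a in ranked if triage_score(a) >= escalate_above]

alerts = [
    Alert("edr", severity=5, asset_criticality=4, confidence=0.9),  # score 18.0
    Alert("ids", severity=2, asset_criticality=1, confidence=0.4),  # score 0.8
]
for alert in prioritize(alerts):
    print(alert.source, triage_score(alert))  # -> edr 18.0
```

Real systems weigh far more signals and feed downstream investigation agents, but the economics are the same: machine-speed filtering so that human judgment is spent only where it matters.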
Defensive AI doesn’t need to be perfect; it just needs to be faster and more persistent than human attackers. When combined with human expertise for strategic oversight, this creates a formidable defensive posture for organizations.
Building AI-Native Security Operations
The Anthropic report underscores that incremental improvements to traditional security tools won’t be enough against AI-augmented adversaries. Organizations need AI-native security operations that match the scale, speed, and intelligence of modern AI attacks.
This means leveraging AI agents that autonomously investigate suspicious activities, correlate threat intelligence across multiple sources, and respond to attacks faster than humans can. It requires SOCs that use AI for real-time threat hunting, automated incident response, and continuous vulnerability assessment.
This new approach demands a shift from reactive to predictive security postures. AI defense systems must anticipate attack vectors, identify potential compromises before they fully manifest, and adapt defensive strategies based on emerging threat patterns.
The Anthropic report clearly highlights that attackers don’t wait for a perfect tool. They train themselves on existing capabilities and can cause damage every day, even if AI progress were to stop tomorrow. Organizations cannot afford to be more cautious than their adversaries.
The AI cybersecurity arms race is already here. The question isn’t whether organizations will face AI-augmented attacks, but whether they’ll be prepared when those attacks happen.
Success demands embracing AI as a core component of security operations, not an experimental add-on. It means leveraging AI agents that operate autonomously while maintaining human oversight for strategic decisions. Most importantly, it requires matching the speed of adoption that attackers have already achieved.
The cybercriminals highlighted in the Anthropic report represent the new threat landscape. Their success demonstrates the magnitude of the challenge and the urgency of the needed response. In this new reality, the organizations that survive and thrive will be those that adopt AI-native security operations with the same speed and determination that their adversaries have already demonstrated.
The race is on. The question is whether defenders will move fast enough to win it.
Why California again backed off on sweeping AI regulation
By Khari Johnson, CalMatters

This story was originally published by CalMatters.
After three years of trying to give Californians the right to know when AI is making a consequential decision about their lives and to appeal when things go wrong, Assemblymember Rebecca Bauer-Kahan said she and her supporters will have to wait again, until next year.
The San Ramon Democrat announced Friday that Assembly Bill 1018, which cleared the Assembly and two Senate committees, has been designated a two-year bill, meaning it can return in next year’s legislative session. That move will allow more time for conversations with Gov. Gavin Newsom and the bill’s more than 70 opponents. The decision came in the final hours of the California legislative session, which ends today.
Her bill would require businesses and government agencies to alert individuals when automated systems are used to make important decisions about them, including for apartment leases, school admissions, and, in the workplace, hiring, firing, promotions, and disciplinary actions. The bill also covers decisions made in education, health care, criminal justice, government benefits, financial services, and insurance.
Automated systems that assign people scores or make recommendations can stop Californians from receiving unemployment benefits they’re entitled to, declare job applicants less qualified for arbitrary reasons that have nothing to do with job performance, or deny people health care or a mortgage because of their race.
“This pause reflects our commitment to getting this critical legislation right, not a retreat from our responsibility to protect Californians,” Bauer-Kahan said in a statement shared with CalMatters.
Bauer-Kahan drew the principles enshrined in the legislation from the Biden administration’s AI Bill of Rights. California has passed more AI regulation than any other state, but it has yet to adopt a law like Bauer-Kahan’s, or like the Colorado AI Act or the European Union’s AI Act, which require disclosure of consequential AI decisions.
The pause comes at a time when politicians in Washington D.C. continue to oppose AI regulation that they say could stand in the way of progress. Last week, leaders of the nation’s largest tech companies joined President Trump at a White House dinner to further discuss a recent executive order and other initiatives to prevent AI regulation. Earlier this year, Congress tried and failed to pass a moratorium on AI regulation by state governments.
When an automated system makes an error, AB 1018 gives people the right to have that mistake rectified within 60 days. It also reiterates that algorithms must give “full and equal” accommodations to everyone, and cannot discriminate against people based on characteristics like age, race, gender, disability, or immigration status. Developers must carry out impact assessments to, among other things, test for bias embedded in their systems. If an impact assessment is not conducted on an AI system, and that system is used to make consequential decisions about people’s lives, the developer faces fines of up to $25,000 per violation, or legal action by the attorney general, public prosecutors, or the Civil Rights Department.
Amendments made to the bill in recent weeks exempted generative AI models from coverage, which could prevent it from affecting major AI companies or the generative AI pilot projects being carried out by state agencies. The bill was also amended to delay a developer auditing requirement until 2030, and to clarify that it is meant to cover systems that evaluate a person and make predictions or recommendations about them.
An intense legislative fight
Samantha Gordon, chief program officer at TechEquity, a sponsor of the bill, said she saw more lobbyists attempt to kill AB 1018 in the California Senate this week than she has seen for any other AI bill. She thinks AB 1018 had a pathway to passage, but the decision was made to pause in order to work with the governor, who ends his second and final term next year.
“There’s a fundamental disagreement about whether or not these tools should face basic scrutiny of testing and informing the public that they’re being used,” Gordon said.
Gordon thinks it’s possible tech companies will use their “unlimited amount of money” to fight the bill next year.
“But it’s clear,” she added, “that Americans want these protections — poll after poll shows Americans want strong laws on AI and that voluntary protections are insufficient.”
AB 1018 faced opposition from industry groups, big tech companies, the state’s largest health care provider, venture capital firms, and the Judicial Council of California, a policymaking body for state courts.
A coalition of hospitals, Kaiser Permanente, and the health care software and AI company Epic Systems urged lawmakers to vote no on AB 1018, arguing that the bill would negatively affect patient care, increase costs, and require developers to contract with third-party auditors to assess compliance by 2030.
A coalition of business groups opposed the bill because of its broad language and concern that compliance could be expensive for businesses and taxpayers. In a video ad campaign, the group TechNet, which seeks to shape policy nationwide and whose members include companies like Apple, Google, Nvidia, and OpenAI, argued that AB 1018 would stifle job growth, raise costs, and punish the fastest-growing industries in the state.
Venture capital firm Andreessen Horowitz, whose co-founder Marc Andreessen supported the re-election of President Trump, opposed the bill, citing costs and the fact that it seeks to regulate AI beyond California’s borders.
A policy leader in the state judiciary, in an alert urging lawmakers to vote no this week, said the burden of compliance with the bill is so great that the judicial branch risks losing the ability to use pretrial risk assessment tools, such as those that assign recidivism scores to sex offenders and violent felons. The state Judicial Council, which makes policy for California courts, estimates that AB 1018 would cost the state up to $300 million a year. Similar points were made in a letter to lawmakers last month.
Why backers keep fighting
Exactly how much AB 1018 could cost taxpayers remains a big unknown, owing to contradictory information from state government agencies. An analysis by California legislative staff found that if the bill passed, it could cost local agencies, state agencies, and the state judicial branch hundreds of millions of dollars. But a California Department of Technology report covered exclusively by CalMatters concluded in May that no state agencies use high-risk automated systems, despite historical evidence to the contrary. Bauer-Kahan said last month that she was surprised by the financial impact estimates because CalMatters reporting found that automated decision-making systems were not in widespread use at the state level.
Support for the bill has come from unions who pledged to discuss AI in bargaining agreements, including the California Nurses Association and the Service Employees International Union, and from groups like the Citizen’s Privacy Coalition, Consumer Reports, and the Consumer Federation of California.
Coauthors of AB 1018 include major Democratic proponents of AI regulation in the California Legislature: Assembly majority leader Cecilia Aguiar-Curry of Davis, author of a bill now on the governor’s desk that seeks to stop algorithms from raising prices on consumer goods; Chula Vista Senator Steve Padilla, whose bill to protect kids from companion chatbots awaits the governor’s decision; and San Diego Assemblymember Chris Ward, who previously helped pass a law requiring state agencies to disclose their use of high-risk automated systems and this year sought to pass a bill to prevent pricing based on a person’s personal information.
The anti-discrimination language in AB 1018 is important because tech companies and their customers often see themselves as exempt from discrimination law if the discrimination is done by automated systems, said Inioluwa Deborah Raji, an AI researcher at UC Berkeley who has audited algorithms for discrimination and advised government officials in Sacramento and Washington D.C. about how AI can harm people. She questions whether state agencies have the resources to enforce AB 1018, but also likes the disclosure requirement in the bill because “I think people deserve to know, and there’s no way that they can appeal or contest without it.”
“I need to know that an AI system was the reason I wasn’t able to rent this house. Then I can at an individual level appeal and contest. There’s something very valuable about that.”
Raji said she witnessed corporate influence and pushback when she helped shape a report about how California can balance guardrails and innovation for generative AI development, and she sees similar forces at play in the delay of AB 1018.
“It’s disappointing this [AB 1018] isn’t the priority for AI policy folks at this time,” she told CalMatters. “I truly hope the fourth time is the charm.”
A number of other union-backed bills seeking to protect workers from artificial intelligence were also considered by lawmakers this session. For the third year in a row, a bill to require a human driver in autonomous commercial delivery trucks failed to become law. Assembly Bill 1331, which sought to prevent AI-powered surveillance of workers in private spaces like locker rooms and lactation rooms and placed limits on surveillance in breakrooms, also failed to pass.
But another measure, Senate Bill 7, passed the Legislature and is headed to the governor. It requires employers to disclose plans to use an automated system 30 days before doing so and lets workers make annual requests for the data an employer used for discipline or firing. In recent days, its author, Senator Jerry McNerney, amended the bill to remove the right to appeal decisions made by AI and to eliminate a prohibition against employers making predictions about a worker’s political beliefs, emotional state, or neural data. The California Labor Federation supported similar bills in Massachusetts, Vermont, Connecticut, and Washington.
This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
Malaysia and Zetrix AI Partner to Build Global Standards for Shariah-Compliant Artificial Intelligence
JOHOR BAHRU, Malaysia, Sept. 13, 2025 /PRNewswire/ — In a significant step towards Islamic values-based artificial intelligence, Zetrix AI Berhad, developer of the world’s first Shariah-aligned Large Language Model (LLM), NurAI, and the Government of Malaysia, through the Prime Minister’s Department (Religious Affairs), today signed a Letter of Intent (LOI) to collaborate on establishing the foremost global framework for Shariah compliance, certification and governance in AI. The ceremony was witnessed by Prime Minister YAB Dato’ Seri Anwar Ibrahim.
Building Trust in NurAI
Front row: Datuk Mohd Jimmy Wong Abdullah, Director of Zetrix AI Berhad (left), and Dato’ Dr. Sirajuddin Suhaimee, Director General of the Department of Islamic Development Malaysia (JAKIM) (right), during the signing of the Letter of Intent between Zetrix AI Berhad and the Government of Malaysia, through the Prime Minister’s Department (Religious Affairs). Back row, from left, the signing was witnessed by: YB Tuan Haji Mohd Fared bin Khalid, Chairman of the Johor State Islamic Religious Affairs Committee; YB Dato’ Haji Asman Shah bin Abd. Rahman, Secretary of the Johor State Government; YAB Dato’ Onn Hafiz bin Ghazi, Chief Minister of Johor; YAB Dato’ Seri Anwar bin Ibrahim, Prime Minister of Malaysia; and YB Senator Dato’ Setia Dr. Haji Mohd Na’im bin Haji Mokhtar, Minister in the Prime Minister’s Department (Religious Affairs).
JAKIM, Malaysia’s Department of Islamic Development, is internationally recognised as the gold standard in halal certification, accrediting foreign certification bodies across nearly 50 countries. Malaysia has consistently ranked first in the Global Islamic Economy Indicator, reflecting its leadership not only in halal certification but also in Islamic finance, food and education. By integrating emerging technologies such as AI and blockchain to enhance compliance and monitoring, Malaysia continues to set holistic benchmarks for the global Islamic economy.
NurAI has already established itself as a pioneering Shariah-aligned AI platform. With today’s collaboration, JAKIM, under the Ministry’s leadership, will play a central role in guiding the certification, governance and ethical standards of NurAI, ensuring its alignment with Islamic principles.
Additionally, this milestone underscores the urgent need for AI systems that move beyond secular or foreign-centric worldviews, offering instead a platform rooted in Islamic ethics. It positions Malaysia as a global leader in ethical and Shariah-compliant AI while setting international benchmarks. The initiative also reflects the country’s halal and digitalisation agendas, ensuring AI remains trusted, secure, and representative of Muslim values while serving more than 2 billion people worldwide.
Prime Minister YAB Dato’ Seri Anwar Ibrahim reinforced that national policies should incorporate various inputs, including digitalisation and artificial intelligence, while always remaining grounded in Islamic principles and values.
Areas of Collaboration
Through the LOI, Zetrix AI and the Government, via JAKIM, propose to collaborate in three key areas:
- Shariah Certification and Governance — Developing frameworks, ethical guidelines and certification standards for AI systems rooted in Islamic principles.
- Global Advocacy and Promotion — Positioning Malaysia as the global centre of excellence for Islamic AI and championing the Islamic digital economy, projected to reach USD 5.74 trillion by 2030.
- JAKIM’s Official Channel on NurAI — Creating a trusted platform for Islamic legal rulings, halal certification and verified Shariah guidance, combating misinformation through AI.
Reinforcing Global Halal Tech Leadership
Through this collaboration, NurAI demonstrates how advanced AI can be guided by ethical and faith-based principles to serve global communities. By extending halal leadership into the digital economy, particularly in Islamic finance, education and law, Malaysia positions itself as a key contributor to setting international benchmarks for Shariah-compliant AI.
Inclusive, Secure and Cost-Effective AI
NurAI is developed in Malaysia, supporting Bahasa Melayu, English, Indonesian and Arabic. It complies with national data sovereignty and cybersecurity policies, reducing reliance on foreign tools while ensuring AI knowledge stays local, trusted, and secure.
NurAI is available for download at nur-ai.zetrix.com.
About Zetrix AI Berhad
Zetrix AI Berhad (“Zetrix AI”), formerly known as MY E.G. Services Berhad, is leading the deployment of blockchain technology and artificial intelligence to power the public and private sectors across ASEAN. Headquartered in Malaysia, Zetrix AI started operations in 2000 as a pioneer in the provision of electronic government services and complementary commercial offerings in its home country. Today, it stands at the forefront of technology transformation in the broader region, leveraging its Layer-1 blockchain platform Zetrix and embracing the convergence of Web3, AI and robotics to enable optimally efficient, intelligent and secure cross-border transactions, digital identity interoperability and automation solutions that seamlessly connect people, businesses and governments.
SOURCE Zetrix AI Berhad