AI Research

Hacker Used Claude AI Agent To Automate Attack Chain

A hacker weaponized a popular artificial intelligence coding tool to run a cybercriminal operation, deploying Claude Code not just as a copilot but as the driver of an entire attack chain.

In a campaign detailed in Anthropic’s August threat intelligence report, an attacker leveraged Claude Code, Anthropic’s AI coding agent, to run operations against 17 distinct organizations in sectors including healthcare, emergency services, government, and religious institutions. But this wasn’t a typical ransomware blitz: it was an orchestrated, AI-driven extortion campaign with strategic, automated execution.

Rather than encrypting data, the attacker threatened to publicly expose stolen information, sometimes demanding ransom payments exceeding $500,000. Anthropic dubs this approach “vibe hacking,” and it marks a paradigm shift: the AI agent handled reconnaissance, credential harvesting, penetration, ransom calculation, and even the design of psychologically tailored extortion messages, all with minimal human intervention.

How Claude Took the Wheel

Claude Code scanned thousands of VPN endpoints, identified vulnerable hosts, and initiated network intrusions. The AI then helped collect, profile, and prioritize data for exfiltration, including the personal, financial, and medical records of the victim organizations.

Claude also analyzed stolen financial datasets to determine optimal ransom levels, and it designed visually alarming HTML extortion notes that were displayed directly on victim machines.

Finally, the AI agent generated obfuscated tunneling tools, including modified versions of Chisel, and developed new proxy methods. When its tools were detected, it even crafted anti-debugging routines and filename masquerading to evade defensive scanners.

A Dangerous Trend in AI-Powered Cybercrime

As Anthropic notes, this marks a fundamental shift: AI is no longer just a support tool but is becoming a standalone attacker capable of running multi-stage cyber campaigns. The report makes clear that this threat model significantly lowers the technical barrier to large-scale cybercrime. Anyone skilled with prompts can now launch complex, tailored, autonomous attacks, something the report predicts will only grow more common.

Anthropic also suggested “a need for new frameworks for evaluating cyber threats that account for AI enablement.”

Anthropic responded by banning the actor’s accounts, rolling out a tailored detection classifier, and sharing technical indicators with partners to help prevent similar abuse.

Anthropic’s report details other misuses of Claude, including North Korea’s fake IT worker scheme, which deploys AI-generated personas for employment fraud, as well as emerging “ransomware-as-a-service” offerings built with AI by actors with no coding expertise.

Also read: US, Japan, South Korea Meet Private Partners to Combat North Korea’s IT Work Fraud Scheme




(Policy Address 2025) HK earmarks HK$3B for AI research and talent recruitment – The Standard (HK)


[2506.08171] Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models


By Daniel Koh and 4 other authors


Abstract: Large language models (LLMs) have demonstrated strong performance on coding tasks such as generation, completion, and repair, but their ability to handle complex symbolic reasoning over code remains underexplored. We introduce the task of worst-case symbolic constraints analysis, which requires inferring the symbolic constraints that characterise worst-case program executions; these constraints can be solved to obtain inputs that expose performance bottlenecks or denial-of-service vulnerabilities in software systems. We show that even state-of-the-art LLMs (e.g., GPT-5) struggle when applied directly to this task. To address this challenge, we propose WARP, a neurosymbolic approach that computes worst-case constraints on smaller concrete input sizes using existing program analysis tools, and then leverages LLMs to generalise these constraints to larger input sizes. Concretely, WARP comprises: (1) an incremental strategy for LLM-based worst-case reasoning, (2) a solver-aligned neurosymbolic framework that integrates reinforcement learning with SMT (Satisfiability Modulo Theories) solving, and (3) a curated dataset of symbolic constraints. Experimental results show that WARP consistently improves performance on worst-case constraint reasoning. Leveraging the curated constraint dataset, we use reinforcement learning to fine-tune a model, WARP-1.0-3B, which significantly outperforms size-matched and even larger baselines. These results demonstrate that incremental constraint reasoning enhances LLMs’ ability to handle symbolic reasoning and highlight the potential for deeper integration between neural learning and formal methods in rigorous program analysis.
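The abstract’s core idea, that worst-case executions can be characterised by a symbolic constraint on the input and the constraint can then be solved (or generalised) to produce concrete worst-case inputs, can be illustrated with a deliberately tiny sketch. This is not WARP’s code: the target program (insertion sort), the comparison counter, and the brute-force search are all illustrative assumptions.

```python
# Minimal illustration (not WARP itself): for insertion sort, the symbolic
# constraint characterising worst-case executions of size n is that the
# input is strictly decreasing (x0 > x1 > ... > x_{n-1}); any input
# satisfying it attains the maximum of n(n-1)/2 comparisons.
from itertools import permutations


def insertion_sort_comparisons(xs):
    """Insertion-sort a copy of xs and return the number of comparisons."""
    xs = list(xs)
    comparisons = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            comparisons += 1
            if xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
            else:
                break
    return comparisons


# Brute-force "constraint solving" at a small concrete size n = 5:
# find an input maximising the comparison count.
n = 5
worst = max(permutations(range(n)), key=insertion_sort_comparisons)
# worst attains the maximum count n(n-1)/2 = 10; the descending input
# [4, 3, 2, 1, 0] is one such maximiser.  Generalising the descending-order
# constraint from small n to arbitrary n is exactly the step WARP asks an
# LLM to perform.
```

Brute force only works at toy sizes, which is the abstract’s point: program analysis tools supply exact worst-case constraints for small inputs, and the LLM’s job is to generalise the pattern to input sizes where enumeration is infeasible.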

Submission history

From: Daniel Koh [view email]
[v1]
Mon, 9 Jun 2025 19:33:30 UTC (1,462 KB)
[v2]
Tue, 16 Sep 2025 10:35:33 UTC (1,871 KB)




SIFThinker: Spatially-Aware Image Focus for Visual Reasoning


By Zhangquan Chen and 6 other authors


Abstract: Current multimodal large language models (MLLMs) still face significant challenges in complex visual tasks (e.g., spatial understanding, fine-grained perception). Prior methods have tried to incorporate visual reasoning; however, they fail to leverage attention correction with spatial cues to iteratively refine their focus on prompt-relevant regions. In this paper, we introduce SIFThinker, a spatially-aware “think-with-images” framework that mimics human visual perception. Specifically, SIFThinker enables attention correction and image-region focusing by interleaving depth-enhanced bounding boxes and natural language. Our contributions are twofold: First, we introduce a reverse-expansion-forward-inference strategy that facilitates the generation of interleaved image-text chains of thought for process-level supervision, which in turn leads to the construction of the SIF-50K dataset. Second, we propose GRPO-SIF, a reinforced training paradigm that integrates depth-informed visual grounding into a unified reasoning pipeline, teaching the model to dynamically correct and focus on prompt-relevant regions. Extensive experiments demonstrate that SIFThinker outperforms state-of-the-art methods in spatial understanding and fine-grained visual perception, while maintaining strong general capabilities, highlighting the effectiveness of our method. Code: this https URL.
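The “think-with-images” loop the abstract describes, natural-language reasoning steps interleaved with bounding-box focus operations that progressively narrow attention to prompt-relevant regions, can be sketched in miniature. This is a hypothetical illustration, not the authors’ pipeline: the toy pixel grid, the `crop` helper, and the hard-coded chain are all assumptions standing in for a learned model.

```python
# Minimal sketch (not SIFThinker's code): an interleaved image-text chain
# of thought, where each "focus" step crops a bounding box from the image
# and each "text" step carries the natural-language reasoning.

def crop(image, bbox):
    """Return the sub-grid of a row-major image covered by bbox = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bbox
    return [row[x0:x1] for row in image[y0:y1]]

# A toy 4x4 "image" of pixel intensities (0..15).
image = [[i * 4 + j for j in range(4)] for i in range(4)]

# An interleaved chain: text steps alternating with bounding-box focus steps,
# mimicking the iterative attention correction described in the abstract.
chain = [
    ("text", "The prompt asks about the bottom-right region."),
    ("focus", (2, 2, 4, 4)),
    ("text", "Refine focus to the single most relevant cell."),
    ("focus", (3, 3, 4, 4)),
]

view = image
for kind, step in chain:
    if kind == "focus":
        # Each focus step re-crops the original image to a tighter region.
        view = crop(image, step)
```

In the real framework the boxes are depth-enhanced and emitted by the model itself during generation; here they are fixed constants purely to show the interleaved structure.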

Submission history

From: Zhangquan Chen [view email]
[v1]
Fri, 8 Aug 2025 12:26:20 UTC (5,223 KB)
[v2]
Thu, 14 Aug 2025 10:34:22 UTC (5,223 KB)
[v3]
Sun, 24 Aug 2025 13:04:46 UTC (5,223 KB)
[v4]
Tue, 16 Sep 2025 09:40:13 UTC (5,223 KB)


